00:00:00.002 Started by upstream project "autotest-nightly-lts" build number 1904 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3165 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.175 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.205 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.205 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.240 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.252 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.261 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:05.261 > git config core.sparsecheckout # timeout=10 00:00:05.270 > git read-tree -mu HEAD # timeout=10 00:00:05.282 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:05.296 Commit message: "pool: fixes for VisualBuild class" 00:00:05.296 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:05.387 [Pipeline] Start of Pipeline 00:00:05.400 [Pipeline] library 00:00:05.402 Loading library shm_lib@master 00:00:05.402 Library shm_lib@master is cached. Copying from home. 00:00:05.416 [Pipeline] node 00:00:05.427 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.428 [Pipeline] { 00:00:05.436 [Pipeline] catchError 00:00:05.437 [Pipeline] { 00:00:05.447 [Pipeline] wrap 00:00:05.456 [Pipeline] { 00:00:05.462 [Pipeline] stage 00:00:05.463 [Pipeline] { (Prologue) 00:00:05.595 [Pipeline] sh 00:00:05.883 + logger -p user.info -t JENKINS-CI 00:00:05.900 [Pipeline] echo 00:00:05.901 Node: CYP9 00:00:05.907 [Pipeline] sh 00:00:06.206 [Pipeline] setCustomBuildProperty 00:00:06.215 [Pipeline] echo 00:00:06.216 Cleanup processes 00:00:06.220 [Pipeline] sh 00:00:06.503 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.503 2056342 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.517 [Pipeline] sh 00:00:06.803 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.803 ++ grep -v 'sudo pgrep' 00:00:06.803 ++ awk '{print $1}' 00:00:06.803 + sudo kill -9 00:00:06.803 + true 00:00:06.819 [Pipeline] cleanWs 00:00:06.829 [WS-CLEANUP] Deleting project workspace... 00:00:06.829 [WS-CLEANUP] Deferred wipeout is used... 
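The "Cleanup processes" step above is a small pgrep/kill pipeline that tolerates the case where nothing is left running on the node. A minimal standalone sketch of the same idea follows; the workspace path is copied from the log, the rest is illustrative and not the pipeline's actual script.

  #!/usr/bin/env bash
  # Kill any SPDK processes left over from a previous run in this workspace.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # pgrep -af prints "PID full-command"; drop the pgrep invocation itself, keep only the PIDs.
  pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # An empty match is normal on a clean node; "|| true" keeps the stage green,
  # which is what the bare "+ true" after "kill -9" does in the trace above.
  [ -n "$pids" ] && sudo kill -9 $pids || true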
00:00:06.836 [WS-CLEANUP] done 00:00:06.839 [Pipeline] setCustomBuildProperty 00:00:06.851 [Pipeline] sh 00:00:07.136 + sudo git config --global --replace-all safe.directory '*' 00:00:07.189 [Pipeline] nodesByLabel 00:00:07.191 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.198 [Pipeline] httpRequest 00:00:07.202 HttpMethod: GET 00:00:07.202 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.206 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.216 Response Code: HTTP/1.1 200 OK 00:00:07.216 Success: Status code 200 is in the accepted range: 200,404 00:00:07.217 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.918 [Pipeline] sh 00:00:10.206 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.227 [Pipeline] httpRequest 00:00:10.232 HttpMethod: GET 00:00:10.233 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:10.234 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:10.255 Response Code: HTTP/1.1 200 OK 00:00:10.255 Success: Status code 200 is in the accepted range: 200,404 00:00:10.256 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:21.463 [Pipeline] sh 00:01:21.750 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:25.064 [Pipeline] sh 00:01:25.377 + git -C spdk log --oneline -n5 00:01:25.378 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:25.378 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:25.378 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:25.378 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:25.378 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:25.389 [Pipeline] } 00:01:25.404 [Pipeline] // stage 00:01:25.412 [Pipeline] stage 00:01:25.414 [Pipeline] { (Prepare) 00:01:25.430 [Pipeline] writeFile 00:01:25.447 [Pipeline] sh 00:01:25.732 + logger -p user.info -t JENKINS-CI 00:01:25.746 [Pipeline] sh 00:01:26.032 + logger -p user.info -t JENKINS-CI 00:01:26.045 [Pipeline] sh 00:01:26.332 + cat autorun-spdk.conf 00:01:26.332 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.332 SPDK_TEST_NVMF=1 00:01:26.332 SPDK_TEST_NVME_CLI=1 00:01:26.332 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.332 SPDK_TEST_NVMF_NICS=e810 00:01:26.332 SPDK_RUN_UBSAN=1 00:01:26.332 NET_TYPE=phy 00:01:26.340 RUN_NIGHTLY=1 00:01:26.344 [Pipeline] readFile 00:01:26.368 [Pipeline] withEnv 00:01:26.370 [Pipeline] { 00:01:26.382 [Pipeline] sh 00:01:26.668 + set -ex 00:01:26.668 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:26.668 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:26.668 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.668 ++ SPDK_TEST_NVMF=1 00:01:26.668 ++ SPDK_TEST_NVME_CLI=1 00:01:26.668 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.668 ++ SPDK_TEST_NVMF_NICS=e810 00:01:26.668 ++ SPDK_RUN_UBSAN=1 00:01:26.668 ++ NET_TYPE=phy 00:01:26.668 ++ RUN_NIGHTLY=1 00:01:26.668 + case $SPDK_TEST_NVMF_NICS in 00:01:26.668 + DRIVERS=ice 00:01:26.668 + [[ tcp == \r\d\m\a ]] 00:01:26.668 + [[ -n ice ]] 00:01:26.668 + sudo rmmod mlx4_ib mlx5_ib irdma 
i40iw iw_cxgb4 00:01:26.668 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:26.668 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:26.668 rmmod: ERROR: Module irdma is not currently loaded 00:01:26.668 rmmod: ERROR: Module i40iw is not currently loaded 00:01:26.668 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:26.668 + true 00:01:26.668 + for D in $DRIVERS 00:01:26.668 + sudo modprobe ice 00:01:26.668 + exit 0 00:01:26.678 [Pipeline] } 00:01:26.697 [Pipeline] // withEnv 00:01:26.702 [Pipeline] } 00:01:26.744 [Pipeline] // stage 00:01:26.759 [Pipeline] catchError 00:01:26.762 [Pipeline] { 00:01:26.775 [Pipeline] timeout 00:01:26.776 Timeout set to expire in 50 min 00:01:26.777 [Pipeline] { 00:01:26.790 [Pipeline] stage 00:01:26.791 [Pipeline] { (Tests) 00:01:26.802 [Pipeline] sh 00:01:27.085 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.085 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.085 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.085 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:27.085 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.085 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.085 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:27.085 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.085 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.085 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.085 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:27.085 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.085 + source /etc/os-release 00:01:27.085 ++ NAME='Fedora Linux' 00:01:27.085 ++ VERSION='38 (Cloud Edition)' 00:01:27.085 ++ ID=fedora 00:01:27.085 ++ VERSION_ID=38 00:01:27.085 ++ VERSION_CODENAME= 00:01:27.085 ++ PLATFORM_ID=platform:f38 00:01:27.085 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:27.085 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.085 ++ LOGO=fedora-logo-icon 00:01:27.085 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:27.085 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.085 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:27.085 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.085 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.085 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.085 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:27.085 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.085 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:27.085 ++ SUPPORT_END=2024-05-14 00:01:27.085 ++ VARIANT='Cloud Edition' 00:01:27.085 ++ VARIANT_ID=cloud 00:01:27.085 + uname -a 00:01:27.085 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:27.085 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:30.390 Hugepages 00:01:30.390 node hugesize free / total 00:01:30.390 node0 1048576kB 0 / 0 00:01:30.390 node0 2048kB 0 / 0 00:01:30.390 node1 1048576kB 0 / 0 00:01:30.390 node1 2048kB 0 / 0 00:01:30.390 00:01:30.390 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:30.390 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:30.390 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:30.390 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:30.390 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:30.390 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma 
- - 00:01:30.390 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:30.390 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:30.390 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:30.390 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:30.390 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:30.390 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:30.390 + rm -f /tmp/spdk-ld-path 00:01:30.390 + source autorun-spdk.conf 00:01:30.390 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.390 ++ SPDK_TEST_NVMF=1 00:01:30.390 ++ SPDK_TEST_NVME_CLI=1 00:01:30.390 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.390 ++ SPDK_TEST_NVMF_NICS=e810 00:01:30.390 ++ SPDK_RUN_UBSAN=1 00:01:30.390 ++ NET_TYPE=phy 00:01:30.390 ++ RUN_NIGHTLY=1 00:01:30.390 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:30.390 + [[ -n '' ]] 00:01:30.390 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.390 + for M in /var/spdk/build-*-manifest.txt 00:01:30.390 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.390 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:30.390 + for M in /var/spdk/build-*-manifest.txt 00:01:30.390 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.390 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:30.390 ++ uname 00:01:30.390 + [[ Linux == \L\i\n\u\x ]] 00:01:30.390 + sudo dmesg -T 00:01:30.390 + sudo dmesg --clear 00:01:30.390 + dmesg_pid=2057429 00:01:30.390 + [[ Fedora Linux == FreeBSD ]] 00:01:30.390 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.390 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.390 + sudo dmesg -Tw 00:01:30.390 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.390 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:30.390 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:30.390 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.390 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.390 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.390 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.390 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:30.390 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.390 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.390 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.390 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.390 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.390 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.390 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:30.390 Test configuration: 00:01:30.390 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.390 SPDK_TEST_NVMF=1 00:01:30.390 SPDK_TEST_NVME_CLI=1 00:01:30.390 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.390 SPDK_TEST_NVMF_NICS=e810 00:01:30.390 SPDK_RUN_UBSAN=1 00:01:30.390 NET_TYPE=phy 00:01:30.390 RUN_NIGHTLY=1 20:57:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:30.390 20:57:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:30.390 20:57:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:30.390 20:57:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:30.391 20:57:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.391 20:57:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.391 20:57:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.391 20:57:08 -- paths/export.sh@5 -- $ export PATH 00:01:30.391 20:57:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.391 20:57:08 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:30.391 20:57:08 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:30.391 20:57:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717873028.XXXXXX 00:01:30.391 20:57:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717873028.jm2KHP 00:01:30.391 20:57:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:30.391 20:57:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
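The paths/export.sh lines in the trace above simply stack the pinned toolchains onto the front of PATH and export it once; unrolled, they are equivalent to the short sketch below. The duplicated entries visible in the echoed PATH come from these directories already being present in the login shell's PATH.

  # Equivalent of the paths/export.sh trace above: prepend each pinned toolchain, then export once.
  PATH=/opt/golangci/1.54.2/bin:$PATH
  PATH=/opt/go/1.21.1/bin:$PATH
  PATH=/opt/protoc/21.7/bin:$PATH
  export PATH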
00:01:30.391 20:57:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:30.391 20:57:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:30.391 20:57:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:30.391 20:57:08 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:30.391 20:57:08 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:30.391 20:57:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.391 20:57:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:30.391 20:57:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.391 20:57:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.391 20:57:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.391 20:57:08 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.391 Sat Jun 8 06:57:08 PM UTC 2024 00:01:30.391 20:57:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.391 LTS-43-g130b9406a 00:01:30.391 20:57:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:30.391 20:57:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.391 20:57:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.391 20:57:08 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:30.391 20:57:08 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:30.391 20:57:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.391 ************************************ 00:01:30.391 START TEST ubsan 00:01:30.391 ************************************ 00:01:30.391 20:57:08 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:30.391 using ubsan 00:01:30.391 00:01:30.391 real 0m0.000s 00:01:30.391 user 0m0.000s 00:01:30.391 sys 0m0.000s 00:01:30.391 20:57:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:30.391 20:57:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.391 ************************************ 00:01:30.391 END TEST ubsan 00:01:30.391 ************************************ 00:01:30.391 20:57:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.391 20:57:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.391 20:57:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.391 20:57:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.391 20:57:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.391 20:57:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.391 20:57:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.391 20:57:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.391 20:57:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:30.391 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:30.391 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.964 Using 'verbs' RDMA provider 00:01:46.452 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:58.692 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:58.692 Creating mk/config.mk...done. 00:01:58.692 Creating mk/cc.flags.mk...done. 00:01:58.692 Type 'make' to build. 00:01:58.692 20:57:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:58.692 20:57:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:58.692 20:57:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:58.692 20:57:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.692 ************************************ 00:01:58.692 START TEST make 00:01:58.692 ************************************ 00:01:58.692 20:57:35 -- common/autotest_common.sh@1104 -- $ make -j144 00:01:58.692 make[1]: Nothing to be done for 'all'. 00:02:06.849 The Meson build system 00:02:06.849 Version: 1.3.1 00:02:06.849 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:06.849 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:06.849 Build type: native build 00:02:06.849 Program cat found: YES (/usr/bin/cat) 00:02:06.849 Project name: DPDK 00:02:06.849 Project version: 23.11.0 00:02:06.849 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:06.849 C linker for the host machine: cc ld.bfd 2.39-16 00:02:06.849 Host machine cpu family: x86_64 00:02:06.849 Host machine cpu: x86_64 00:02:06.849 Message: ## Building in Developer Mode ## 00:02:06.849 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.849 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.849 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.849 Program python3 found: YES (/usr/bin/python3) 00:02:06.849 Program cat found: YES (/usr/bin/cat) 00:02:06.849 Compiler for C supports arguments -march=native: YES 00:02:06.849 Checking for size of "void *" : 8 00:02:06.849 Checking for size of "void *" : 8 (cached) 00:02:06.849 Library m found: YES 00:02:06.849 Library numa found: YES 00:02:06.849 Has header "numaif.h" : YES 00:02:06.849 Library fdt found: NO 00:02:06.849 Library execinfo found: NO 00:02:06.849 Has header "execinfo.h" : YES 00:02:06.849 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:06.849 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.849 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.849 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.849 Run-time dependency openssl found: YES 3.0.9 00:02:06.849 Run-time dependency libpcap found: YES 1.10.4 00:02:06.849 Has header "pcap.h" with dependency libpcap: YES 00:02:06.849 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.849 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.849 Compiler for C supports arguments -Wformat: YES 00:02:06.849 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.849 Compiler for C supports arguments -Wformat-security: NO 00:02:06.849 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.849 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.849 Compiler for C 
supports arguments -Wnested-externs: YES 00:02:06.849 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.849 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.849 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.849 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.850 Compiler for C supports arguments -Wundef: YES 00:02:06.850 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.850 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.850 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.850 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.850 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.850 Program objdump found: YES (/usr/bin/objdump) 00:02:06.850 Compiler for C supports arguments -mavx512f: YES 00:02:06.850 Checking if "AVX512 checking" compiles: YES 00:02:06.850 Fetching value of define "__SSE4_2__" : 1 00:02:06.850 Fetching value of define "__AES__" : 1 00:02:06.850 Fetching value of define "__AVX__" : 1 00:02:06.850 Fetching value of define "__AVX2__" : 1 00:02:06.850 Fetching value of define "__AVX512BW__" : 1 00:02:06.850 Fetching value of define "__AVX512CD__" : 1 00:02:06.850 Fetching value of define "__AVX512DQ__" : 1 00:02:06.850 Fetching value of define "__AVX512F__" : 1 00:02:06.850 Fetching value of define "__AVX512VL__" : 1 00:02:06.850 Fetching value of define "__PCLMUL__" : 1 00:02:06.850 Fetching value of define "__RDRND__" : 1 00:02:06.850 Fetching value of define "__RDSEED__" : 1 00:02:06.850 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:06.850 Fetching value of define "__znver1__" : (undefined) 00:02:06.850 Fetching value of define "__znver2__" : (undefined) 00:02:06.850 Fetching value of define "__znver3__" : (undefined) 00:02:06.850 Fetching value of define "__znver4__" : (undefined) 00:02:06.850 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.850 Message: lib/log: Defining dependency "log" 00:02:06.850 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.850 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.850 Checking for function "getentropy" : NO 00:02:06.850 Message: lib/eal: Defining dependency "eal" 00:02:06.850 Message: lib/ring: Defining dependency "ring" 00:02:06.850 Message: lib/rcu: Defining dependency "rcu" 00:02:06.850 Message: lib/mempool: Defining dependency "mempool" 00:02:06.850 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.850 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.850 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.850 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.850 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.850 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.850 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:06.850 Compiler for C supports arguments -mpclmul: YES 00:02:06.850 Compiler for C supports arguments -maes: YES 00:02:06.850 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.850 Compiler for C supports arguments -mavx512bw: YES 00:02:06.850 Compiler for C supports arguments -mavx512dq: YES 00:02:06.850 Compiler for C supports arguments -mavx512vl: YES 00:02:06.850 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.850 Compiler for C supports arguments -mavx2: YES 00:02:06.850 Compiler for C supports arguments -mavx: YES 00:02:06.850 Message: lib/net: Defining dependency "net" 
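Each "Compiler for C supports arguments ..." and "Fetching value of define ..." line above is essentially Meson test-compiling a tiny program against the host compiler. The same checks can be reproduced by hand, for example as below; this is illustrative only, not what Meson literally executes.

  # Hand-run equivalents of two of the probes above.
  # 1) Does cc accept -mavx512f?
  echo 'int main(void){return 0;}' | cc -mavx512f -x c -o /dev/null - && echo '-mavx512f: YES'
  # 2) Is __AVX512F__ predefined when building with -march=native (as this build does)?
  cc -march=native -dM -E - </dev/null | grep -w __AVX512F__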
00:02:06.850 Message: lib/meter: Defining dependency "meter" 00:02:06.850 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.850 Message: lib/pci: Defining dependency "pci" 00:02:06.850 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.850 Message: lib/hash: Defining dependency "hash" 00:02:06.850 Message: lib/timer: Defining dependency "timer" 00:02:06.850 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.850 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.850 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.850 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.850 Message: lib/power: Defining dependency "power" 00:02:06.850 Message: lib/reorder: Defining dependency "reorder" 00:02:06.850 Message: lib/security: Defining dependency "security" 00:02:06.850 Has header "linux/userfaultfd.h" : YES 00:02:06.850 Has header "linux/vduse.h" : YES 00:02:06.850 Message: lib/vhost: Defining dependency "vhost" 00:02:06.850 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.850 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.850 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.850 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.850 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.850 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.850 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.850 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.850 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.850 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.850 Program doxygen found: YES (/usr/bin/doxygen) 00:02:06.850 Configuring doxy-api-html.conf using configuration 00:02:06.850 Configuring doxy-api-man.conf using configuration 00:02:06.850 Program mandb found: YES (/usr/bin/mandb) 00:02:06.850 Program sphinx-build found: NO 00:02:06.850 Configuring rte_build_config.h using configuration 00:02:06.850 Message: 00:02:06.850 ================= 00:02:06.850 Applications Enabled 00:02:06.850 ================= 00:02:06.850 00:02:06.850 apps: 00:02:06.850 00:02:06.850 00:02:06.850 Message: 00:02:06.850 ================= 00:02:06.850 Libraries Enabled 00:02:06.850 ================= 00:02:06.850 00:02:06.850 libs: 00:02:06.850 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:06.850 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:06.850 cryptodev, dmadev, power, reorder, security, vhost, 00:02:06.850 00:02:06.850 Message: 00:02:06.850 =============== 00:02:06.850 Drivers Enabled 00:02:06.850 =============== 00:02:06.850 00:02:06.850 common: 00:02:06.850 00:02:06.850 bus: 00:02:06.850 pci, vdev, 00:02:06.850 mempool: 00:02:06.850 ring, 00:02:06.850 dma: 00:02:06.850 00:02:06.850 net: 00:02:06.850 00:02:06.850 crypto: 00:02:06.850 00:02:06.850 compress: 00:02:06.850 00:02:06.850 vdpa: 00:02:06.850 00:02:06.850 00:02:06.850 Message: 00:02:06.850 ================= 00:02:06.850 Content Skipped 00:02:06.850 ================= 00:02:06.850 00:02:06.850 apps: 00:02:06.850 dumpcap: explicitly disabled via build config 00:02:06.850 graph: explicitly disabled via build config 00:02:06.850 pdump: explicitly disabled via build config 00:02:06.850 proc-info: explicitly disabled via build config 00:02:06.850 test-acl: explicitly disabled via build config 
00:02:06.850 test-bbdev: explicitly disabled via build config 00:02:06.850 test-cmdline: explicitly disabled via build config 00:02:06.850 test-compress-perf: explicitly disabled via build config 00:02:06.850 test-crypto-perf: explicitly disabled via build config 00:02:06.850 test-dma-perf: explicitly disabled via build config 00:02:06.850 test-eventdev: explicitly disabled via build config 00:02:06.850 test-fib: explicitly disabled via build config 00:02:06.850 test-flow-perf: explicitly disabled via build config 00:02:06.850 test-gpudev: explicitly disabled via build config 00:02:06.850 test-mldev: explicitly disabled via build config 00:02:06.850 test-pipeline: explicitly disabled via build config 00:02:06.850 test-pmd: explicitly disabled via build config 00:02:06.850 test-regex: explicitly disabled via build config 00:02:06.850 test-sad: explicitly disabled via build config 00:02:06.850 test-security-perf: explicitly disabled via build config 00:02:06.850 00:02:06.850 libs: 00:02:06.850 metrics: explicitly disabled via build config 00:02:06.850 acl: explicitly disabled via build config 00:02:06.850 bbdev: explicitly disabled via build config 00:02:06.850 bitratestats: explicitly disabled via build config 00:02:06.850 bpf: explicitly disabled via build config 00:02:06.850 cfgfile: explicitly disabled via build config 00:02:06.850 distributor: explicitly disabled via build config 00:02:06.850 efd: explicitly disabled via build config 00:02:06.850 eventdev: explicitly disabled via build config 00:02:06.850 dispatcher: explicitly disabled via build config 00:02:06.850 gpudev: explicitly disabled via build config 00:02:06.850 gro: explicitly disabled via build config 00:02:06.850 gso: explicitly disabled via build config 00:02:06.850 ip_frag: explicitly disabled via build config 00:02:06.850 jobstats: explicitly disabled via build config 00:02:06.850 latencystats: explicitly disabled via build config 00:02:06.850 lpm: explicitly disabled via build config 00:02:06.850 member: explicitly disabled via build config 00:02:06.850 pcapng: explicitly disabled via build config 00:02:06.850 rawdev: explicitly disabled via build config 00:02:06.850 regexdev: explicitly disabled via build config 00:02:06.850 mldev: explicitly disabled via build config 00:02:06.850 rib: explicitly disabled via build config 00:02:06.850 sched: explicitly disabled via build config 00:02:06.850 stack: explicitly disabled via build config 00:02:06.850 ipsec: explicitly disabled via build config 00:02:06.850 pdcp: explicitly disabled via build config 00:02:06.850 fib: explicitly disabled via build config 00:02:06.850 port: explicitly disabled via build config 00:02:06.850 pdump: explicitly disabled via build config 00:02:06.850 table: explicitly disabled via build config 00:02:06.850 pipeline: explicitly disabled via build config 00:02:06.850 graph: explicitly disabled via build config 00:02:06.850 node: explicitly disabled via build config 00:02:06.850 00:02:06.850 drivers: 00:02:06.850 common/cpt: not in enabled drivers build config 00:02:06.850 common/dpaax: not in enabled drivers build config 00:02:06.850 common/iavf: not in enabled drivers build config 00:02:06.850 common/idpf: not in enabled drivers build config 00:02:06.850 common/mvep: not in enabled drivers build config 00:02:06.850 common/octeontx: not in enabled drivers build config 00:02:06.850 bus/auxiliary: not in enabled drivers build config 00:02:06.850 bus/cdx: not in enabled drivers build config 00:02:06.850 bus/dpaa: not in enabled drivers build config 
00:02:06.850 bus/fslmc: not in enabled drivers build config 00:02:06.850 bus/ifpga: not in enabled drivers build config 00:02:06.850 bus/platform: not in enabled drivers build config 00:02:06.850 bus/vmbus: not in enabled drivers build config 00:02:06.850 common/cnxk: not in enabled drivers build config 00:02:06.851 common/mlx5: not in enabled drivers build config 00:02:06.851 common/nfp: not in enabled drivers build config 00:02:06.851 common/qat: not in enabled drivers build config 00:02:06.851 common/sfc_efx: not in enabled drivers build config 00:02:06.851 mempool/bucket: not in enabled drivers build config 00:02:06.851 mempool/cnxk: not in enabled drivers build config 00:02:06.851 mempool/dpaa: not in enabled drivers build config 00:02:06.851 mempool/dpaa2: not in enabled drivers build config 00:02:06.851 mempool/octeontx: not in enabled drivers build config 00:02:06.851 mempool/stack: not in enabled drivers build config 00:02:06.851 dma/cnxk: not in enabled drivers build config 00:02:06.851 dma/dpaa: not in enabled drivers build config 00:02:06.851 dma/dpaa2: not in enabled drivers build config 00:02:06.851 dma/hisilicon: not in enabled drivers build config 00:02:06.851 dma/idxd: not in enabled drivers build config 00:02:06.851 dma/ioat: not in enabled drivers build config 00:02:06.851 dma/skeleton: not in enabled drivers build config 00:02:06.851 net/af_packet: not in enabled drivers build config 00:02:06.851 net/af_xdp: not in enabled drivers build config 00:02:06.851 net/ark: not in enabled drivers build config 00:02:06.851 net/atlantic: not in enabled drivers build config 00:02:06.851 net/avp: not in enabled drivers build config 00:02:06.851 net/axgbe: not in enabled drivers build config 00:02:06.851 net/bnx2x: not in enabled drivers build config 00:02:06.851 net/bnxt: not in enabled drivers build config 00:02:06.851 net/bonding: not in enabled drivers build config 00:02:06.851 net/cnxk: not in enabled drivers build config 00:02:06.851 net/cpfl: not in enabled drivers build config 00:02:06.851 net/cxgbe: not in enabled drivers build config 00:02:06.851 net/dpaa: not in enabled drivers build config 00:02:06.851 net/dpaa2: not in enabled drivers build config 00:02:06.851 net/e1000: not in enabled drivers build config 00:02:06.851 net/ena: not in enabled drivers build config 00:02:06.851 net/enetc: not in enabled drivers build config 00:02:06.851 net/enetfec: not in enabled drivers build config 00:02:06.851 net/enic: not in enabled drivers build config 00:02:06.851 net/failsafe: not in enabled drivers build config 00:02:06.851 net/fm10k: not in enabled drivers build config 00:02:06.851 net/gve: not in enabled drivers build config 00:02:06.851 net/hinic: not in enabled drivers build config 00:02:06.851 net/hns3: not in enabled drivers build config 00:02:06.851 net/i40e: not in enabled drivers build config 00:02:06.851 net/iavf: not in enabled drivers build config 00:02:06.851 net/ice: not in enabled drivers build config 00:02:06.851 net/idpf: not in enabled drivers build config 00:02:06.851 net/igc: not in enabled drivers build config 00:02:06.851 net/ionic: not in enabled drivers build config 00:02:06.851 net/ipn3ke: not in enabled drivers build config 00:02:06.851 net/ixgbe: not in enabled drivers build config 00:02:06.851 net/mana: not in enabled drivers build config 00:02:06.851 net/memif: not in enabled drivers build config 00:02:06.851 net/mlx4: not in enabled drivers build config 00:02:06.851 net/mlx5: not in enabled drivers build config 00:02:06.851 net/mvneta: not in enabled 
drivers build config 00:02:06.851 net/mvpp2: not in enabled drivers build config 00:02:06.851 net/netvsc: not in enabled drivers build config 00:02:06.851 net/nfb: not in enabled drivers build config 00:02:06.851 net/nfp: not in enabled drivers build config 00:02:06.851 net/ngbe: not in enabled drivers build config 00:02:06.851 net/null: not in enabled drivers build config 00:02:06.851 net/octeontx: not in enabled drivers build config 00:02:06.851 net/octeon_ep: not in enabled drivers build config 00:02:06.851 net/pcap: not in enabled drivers build config 00:02:06.851 net/pfe: not in enabled drivers build config 00:02:06.851 net/qede: not in enabled drivers build config 00:02:06.851 net/ring: not in enabled drivers build config 00:02:06.851 net/sfc: not in enabled drivers build config 00:02:06.851 net/softnic: not in enabled drivers build config 00:02:06.851 net/tap: not in enabled drivers build config 00:02:06.851 net/thunderx: not in enabled drivers build config 00:02:06.851 net/txgbe: not in enabled drivers build config 00:02:06.851 net/vdev_netvsc: not in enabled drivers build config 00:02:06.851 net/vhost: not in enabled drivers build config 00:02:06.851 net/virtio: not in enabled drivers build config 00:02:06.851 net/vmxnet3: not in enabled drivers build config 00:02:06.851 raw/*: missing internal dependency, "rawdev" 00:02:06.851 crypto/armv8: not in enabled drivers build config 00:02:06.851 crypto/bcmfs: not in enabled drivers build config 00:02:06.851 crypto/caam_jr: not in enabled drivers build config 00:02:06.851 crypto/ccp: not in enabled drivers build config 00:02:06.851 crypto/cnxk: not in enabled drivers build config 00:02:06.851 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.851 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.851 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.851 crypto/mlx5: not in enabled drivers build config 00:02:06.851 crypto/mvsam: not in enabled drivers build config 00:02:06.851 crypto/nitrox: not in enabled drivers build config 00:02:06.851 crypto/null: not in enabled drivers build config 00:02:06.851 crypto/octeontx: not in enabled drivers build config 00:02:06.851 crypto/openssl: not in enabled drivers build config 00:02:06.851 crypto/scheduler: not in enabled drivers build config 00:02:06.851 crypto/uadk: not in enabled drivers build config 00:02:06.851 crypto/virtio: not in enabled drivers build config 00:02:06.851 compress/isal: not in enabled drivers build config 00:02:06.851 compress/mlx5: not in enabled drivers build config 00:02:06.851 compress/octeontx: not in enabled drivers build config 00:02:06.851 compress/zlib: not in enabled drivers build config 00:02:06.851 regex/*: missing internal dependency, "regexdev" 00:02:06.851 ml/*: missing internal dependency, "mldev" 00:02:06.851 vdpa/ifc: not in enabled drivers build config 00:02:06.851 vdpa/mlx5: not in enabled drivers build config 00:02:06.851 vdpa/nfp: not in enabled drivers build config 00:02:06.851 vdpa/sfc: not in enabled drivers build config 00:02:06.851 event/*: missing internal dependency, "eventdev" 00:02:06.851 baseband/*: missing internal dependency, "bbdev" 00:02:06.851 gpu/*: missing internal dependency, "gpudev" 00:02:06.851 00:02:06.851 00:02:06.851 Build targets in project: 84 00:02:06.851 00:02:06.851 DPDK 23.11.0 00:02:06.851 00:02:06.851 User defined options 00:02:06.851 buildtype : debug 00:02:06.851 default_library : shared 00:02:06.851 libdir : lib 00:02:06.851 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:06.851 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:06.851 c_link_args : 00:02:06.851 cpu_instruction_set: native 00:02:06.851 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:06.851 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:06.851 enable_docs : false 00:02:06.851 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:06.851 enable_kmods : false 00:02:06.851 tests : false 00:02:06.851 00:02:06.851 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.851 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:06.851 [1/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.851 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:06.851 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:06.851 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:06.851 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.851 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.851 [7/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.851 [8/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:06.851 [9/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:06.851 [10/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.851 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.851 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.851 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.851 [14/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.851 [15/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.851 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:06.851 [17/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.851 [18/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.851 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.851 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.851 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.851 [22/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:07.122 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:07.122 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:07.122 [25/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:07.122 [26/264] Linking static target lib/librte_pci.a 00:02:07.122 [27/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.122 [28/264] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.122 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.122 [30/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.122 [31/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.122 [32/264] Linking static target lib/librte_kvargs.a 00:02:07.122 [33/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:07.122 [34/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:07.122 [35/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.122 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:07.122 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:07.122 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:07.122 [39/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:07.122 [40/264] Linking static target lib/librte_log.a 00:02:07.122 [41/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:07.122 [42/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:07.122 [43/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:07.122 [44/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.122 [45/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:07.122 [46/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:07.122 [47/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.122 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:07.122 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.122 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.122 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.122 [52/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:07.122 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:07.122 [54/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:07.122 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:07.122 [56/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.122 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.383 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.383 [59/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:07.383 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.383 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:07.383 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:07.383 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.383 [64/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.383 [65/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.383 [66/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.383 [67/264] Linking static target lib/librte_rcu.a 00:02:07.383 [68/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 
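The compile steps in this stretch of the log come from the embedded DPDK sub-build configured just above. For reference, re-running that configuration by hand would look roughly like the following, with a reduced option set copied from the Meson summary; SPDK's configure script normally drives this, so treat it purely as a sketch.

  # Reduced, hand-run equivalent of the DPDK sub-build configured above (options from the Meson summary).
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
  meson setup build-tmp \
      --prefix="$PWD/build" \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
      -Dtests=false -Denable_docs=false \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds'
  ninja -C build-tmp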
00:02:07.383 [69/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.383 [70/264] Linking static target lib/librte_ring.a 00:02:07.383 [71/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.383 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:07.383 [73/264] Linking static target lib/librte_meter.a 00:02:07.383 [74/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.383 [75/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.383 [76/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.383 [77/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:07.383 [78/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.383 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:07.383 [80/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.383 [81/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.383 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:07.383 [83/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:07.383 [84/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.383 [85/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.383 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.383 [87/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.383 [88/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.383 [89/264] Linking static target lib/librte_telemetry.a 00:02:07.383 [90/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.383 [91/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.383 [92/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.383 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.383 [94/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.383 [95/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.383 [96/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.383 [97/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.383 [98/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.383 [99/264] Linking static target lib/librte_timer.a 00:02:07.383 [100/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.383 [101/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:07.383 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:07.383 [103/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.383 [104/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:07.383 [105/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.383 [106/264] Linking static target lib/librte_cmdline.a 00:02:07.383 [107/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.383 [108/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.383 [109/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:07.383 [110/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.383 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:07.383 [112/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.383 [113/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:07.383 [114/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:07.383 [115/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.383 [116/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.383 [117/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:07.383 [118/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.383 [119/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.383 [120/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.383 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.383 [122/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.383 [123/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.383 [124/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.383 [125/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.383 [126/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.644 [127/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.644 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.644 [129/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:07.644 [130/264] Linking static target lib/librte_power.a 00:02:07.644 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:07.644 [132/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.644 [133/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.644 [134/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.644 [135/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:07.644 [136/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:07.644 [137/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.644 [138/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.644 [139/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.644 [140/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:07.644 [141/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.644 [142/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.644 [143/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.644 [144/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.644 [145/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.644 [146/264] Linking static target lib/librte_dmadev.a 00:02:07.644 [147/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.644 [148/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.644 [149/264] Compiling C 
object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.644 [150/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.644 [151/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.644 [152/264] Linking static target lib/librte_mbuf.a 00:02:07.644 [153/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.644 [154/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.645 [155/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.645 [156/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.645 [157/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.645 [158/264] Linking static target drivers/librte_bus_vdev.a 00:02:07.645 [159/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.645 [160/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.645 [161/264] Linking static target lib/librte_net.a 00:02:07.645 [162/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:07.645 [163/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:07.645 [164/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:07.645 [165/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.645 [166/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.645 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:07.645 [168/264] Linking static target lib/librte_compressdev.a 00:02:07.645 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.645 [170/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.645 [171/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:07.645 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.645 [173/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.645 [174/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.645 [175/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.645 [176/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.645 [177/264] Linking static target drivers/librte_bus_pci.a 00:02:07.645 [178/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.645 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:07.645 [180/264] Linking static target lib/librte_eal.a 00:02:07.645 [181/264] Linking static target lib/librte_mempool.a 00:02:07.645 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:07.645 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:07.645 [184/264] Linking static target lib/librte_reorder.a 00:02:07.645 [185/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:07.645 [186/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.645 [187/264] Linking static target lib/librte_security.a 00:02:07.905 [188/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.905 [189/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.905 [190/264] Linking static 
target lib/librte_cryptodev.a 00:02:07.905 [191/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:07.905 [192/264] Linking target lib/librte_log.so.24.0 00:02:07.905 [193/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.905 [194/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.905 [195/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.905 [196/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.905 [197/264] Linking static target drivers/librte_mempool_ring.a 00:02:07.905 [198/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:07.905 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:07.905 [200/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.905 [201/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.905 [202/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.905 [203/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.905 [204/264] Linking static target lib/librte_hash.a 00:02:07.905 [205/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:07.905 [206/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.905 [207/264] Linking target lib/librte_kvargs.so.24.0 00:02:07.905 [208/264] Linking target lib/librte_telemetry.so.24.0 00:02:08.165 [209/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.165 [210/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:08.165 [211/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:08.165 [212/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.165 [213/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.426 [214/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.426 [215/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.426 [216/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.426 [217/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.426 [218/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.426 [219/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.687 [220/264] Linking static target lib/librte_ethdev.a 00:02:08.687 [221/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.687 [222/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.687 [223/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.630 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.630 [225/264] Linking static target lib/librte_vhost.a 00:02:09.892 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.809 [227/264] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.396 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.340 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.340 [230/264] Linking target lib/librte_eal.so.24.0 00:02:19.340 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:19.340 [232/264] Linking target lib/librte_ring.so.24.0 00:02:19.340 [233/264] Linking target lib/librte_timer.so.24.0 00:02:19.340 [234/264] Linking target lib/librte_meter.so.24.0 00:02:19.340 [235/264] Linking target lib/librte_pci.so.24.0 00:02:19.340 [236/264] Linking target lib/librte_dmadev.so.24.0 00:02:19.340 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:19.601 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:19.601 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:19.601 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:19.601 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:19.601 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:19.601 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:19.601 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:19.601 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:19.601 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:19.601 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:19.862 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:19.862 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:19.862 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:19.862 [251/264] Linking target lib/librte_reorder.so.24.0 00:02:19.862 [252/264] Linking target lib/librte_net.so.24.0 00:02:19.862 [253/264] Linking target lib/librte_compressdev.so.24.0 00:02:19.862 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:20.124 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:20.124 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:20.124 [257/264] Linking target lib/librte_hash.so.24.0 00:02:20.124 [258/264] Linking target lib/librte_cmdline.so.24.0 00:02:20.124 [259/264] Linking target lib/librte_ethdev.so.24.0 00:02:20.124 [260/264] Linking target lib/librte_security.so.24.0 00:02:20.124 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:20.386 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:20.386 [263/264] Linking target lib/librte_power.so.24.0 00:02:20.386 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:20.386 INFO: autodetecting backend as ninja 00:02:20.386 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:21.373 CC lib/log/log.o 00:02:21.373 CC lib/log/log_flags.o 00:02:21.373 CC lib/log/log_deprecated.o 00:02:21.373 CC lib/ut_mock/mock.o 00:02:21.373 CC lib/ut/ut.o 00:02:21.373 LIB libspdk_ut_mock.a 00:02:21.373 LIB libspdk_log.a 00:02:21.373 LIB libspdk_ut.a 00:02:21.373 SO libspdk_ut_mock.so.5.0 
00:02:21.373 SO libspdk_log.so.6.1 00:02:21.373 SO libspdk_ut.so.1.0 00:02:21.373 SYMLINK libspdk_ut_mock.so 00:02:21.635 SYMLINK libspdk_ut.so 00:02:21.635 SYMLINK libspdk_log.so 00:02:21.635 CXX lib/trace_parser/trace.o 00:02:21.635 CC lib/ioat/ioat.o 00:02:21.635 CC lib/util/base64.o 00:02:21.635 CC lib/util/bit_array.o 00:02:21.635 CC lib/util/cpuset.o 00:02:21.635 CC lib/util/crc16.o 00:02:21.635 CC lib/dma/dma.o 00:02:21.635 CC lib/util/crc32.o 00:02:21.897 CC lib/util/crc32c.o 00:02:21.897 CC lib/util/crc64.o 00:02:21.897 CC lib/util/crc32_ieee.o 00:02:21.897 CC lib/util/dif.o 00:02:21.897 CC lib/util/fd.o 00:02:21.897 CC lib/util/file.o 00:02:21.897 CC lib/util/hexlify.o 00:02:21.897 CC lib/util/iov.o 00:02:21.897 CC lib/util/pipe.o 00:02:21.897 CC lib/util/math.o 00:02:21.897 CC lib/util/strerror_tls.o 00:02:21.897 CC lib/util/string.o 00:02:21.897 CC lib/util/uuid.o 00:02:21.897 CC lib/util/fd_group.o 00:02:21.897 CC lib/util/xor.o 00:02:21.897 CC lib/util/zipf.o 00:02:21.897 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.897 CC lib/vfio_user/host/vfio_user.o 00:02:21.897 LIB libspdk_dma.a 00:02:21.897 SO libspdk_dma.so.3.0 00:02:22.159 LIB libspdk_ioat.a 00:02:22.159 SYMLINK libspdk_dma.so 00:02:22.159 SO libspdk_ioat.so.6.0 00:02:22.159 LIB libspdk_vfio_user.a 00:02:22.159 SYMLINK libspdk_ioat.so 00:02:22.159 SO libspdk_vfio_user.so.4.0 00:02:22.159 SYMLINK libspdk_vfio_user.so 00:02:22.159 LIB libspdk_util.a 00:02:22.421 SO libspdk_util.so.8.0 00:02:22.421 SYMLINK libspdk_util.so 00:02:22.681 LIB libspdk_trace_parser.a 00:02:22.681 SO libspdk_trace_parser.so.4.0 00:02:22.681 CC lib/json/json_util.o 00:02:22.681 CC lib/json/json_parse.o 00:02:22.681 CC lib/json/json_write.o 00:02:22.681 SYMLINK libspdk_trace_parser.so 00:02:22.681 CC lib/idxd/idxd.o 00:02:22.681 CC lib/idxd/idxd_user.o 00:02:22.681 CC lib/idxd/idxd_kernel.o 00:02:22.681 CC lib/vmd/vmd.o 00:02:22.681 CC lib/vmd/led.o 00:02:22.681 CC lib/rdma/common.o 00:02:22.681 CC lib/env_dpdk/env.o 00:02:22.681 CC lib/conf/conf.o 00:02:22.681 CC lib/rdma/rdma_verbs.o 00:02:22.681 CC lib/env_dpdk/memory.o 00:02:22.681 CC lib/env_dpdk/pci.o 00:02:22.681 CC lib/env_dpdk/init.o 00:02:22.681 CC lib/env_dpdk/threads.o 00:02:22.681 CC lib/env_dpdk/pci_ioat.o 00:02:22.681 CC lib/env_dpdk/pci_virtio.o 00:02:22.681 CC lib/env_dpdk/pci_vmd.o 00:02:22.681 CC lib/env_dpdk/pci_idxd.o 00:02:22.681 CC lib/env_dpdk/pci_event.o 00:02:22.681 CC lib/env_dpdk/sigbus_handler.o 00:02:22.681 CC lib/env_dpdk/pci_dpdk.o 00:02:22.681 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.681 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:22.941 LIB libspdk_conf.a 00:02:22.941 LIB libspdk_rdma.a 00:02:22.941 LIB libspdk_json.a 00:02:22.941 SO libspdk_conf.so.5.0 00:02:22.941 SO libspdk_rdma.so.5.0 00:02:22.941 SO libspdk_json.so.5.1 00:02:22.941 SYMLINK libspdk_conf.so 00:02:22.941 SYMLINK libspdk_rdma.so 00:02:23.202 SYMLINK libspdk_json.so 00:02:23.202 LIB libspdk_idxd.a 00:02:23.202 LIB libspdk_vmd.a 00:02:23.202 SO libspdk_idxd.so.11.0 00:02:23.202 SO libspdk_vmd.so.5.0 00:02:23.202 SYMLINK libspdk_idxd.so 00:02:23.202 SYMLINK libspdk_vmd.so 00:02:23.202 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.202 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.202 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.202 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.463 LIB libspdk_jsonrpc.a 00:02:23.725 SO libspdk_jsonrpc.so.5.1 00:02:23.725 SYMLINK libspdk_jsonrpc.so 00:02:23.986 LIB libspdk_env_dpdk.a 00:02:23.986 CC lib/rpc/rpc.o 00:02:23.987 SO libspdk_env_dpdk.so.13.0 00:02:24.248 SYMLINK 
libspdk_env_dpdk.so 00:02:24.248 LIB libspdk_rpc.a 00:02:24.248 SO libspdk_rpc.so.5.0 00:02:24.248 SYMLINK libspdk_rpc.so 00:02:24.510 CC lib/notify/notify.o 00:02:24.510 CC lib/trace/trace.o 00:02:24.510 CC lib/notify/notify_rpc.o 00:02:24.510 CC lib/trace/trace_flags.o 00:02:24.510 CC lib/trace/trace_rpc.o 00:02:24.510 CC lib/sock/sock.o 00:02:24.510 CC lib/sock/sock_rpc.o 00:02:24.772 LIB libspdk_notify.a 00:02:24.772 SO libspdk_notify.so.5.0 00:02:24.772 LIB libspdk_trace.a 00:02:24.772 SO libspdk_trace.so.9.0 00:02:24.772 SYMLINK libspdk_notify.so 00:02:24.772 SYMLINK libspdk_trace.so 00:02:24.772 LIB libspdk_sock.a 00:02:24.772 SO libspdk_sock.so.8.0 00:02:25.033 SYMLINK libspdk_sock.so 00:02:25.033 CC lib/thread/thread.o 00:02:25.033 CC lib/thread/iobuf.o 00:02:25.294 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.294 CC lib/nvme/nvme_ctrlr.o 00:02:25.294 CC lib/nvme/nvme_fabric.o 00:02:25.294 CC lib/nvme/nvme_ns_cmd.o 00:02:25.294 CC lib/nvme/nvme_ns.o 00:02:25.294 CC lib/nvme/nvme_pcie_common.o 00:02:25.294 CC lib/nvme/nvme_pcie.o 00:02:25.294 CC lib/nvme/nvme_qpair.o 00:02:25.294 CC lib/nvme/nvme.o 00:02:25.294 CC lib/nvme/nvme_quirks.o 00:02:25.294 CC lib/nvme/nvme_transport.o 00:02:25.294 CC lib/nvme/nvme_discovery.o 00:02:25.294 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.294 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.294 CC lib/nvme/nvme_tcp.o 00:02:25.294 CC lib/nvme/nvme_opal.o 00:02:25.295 CC lib/nvme/nvme_io_msg.o 00:02:25.295 CC lib/nvme/nvme_poll_group.o 00:02:25.295 CC lib/nvme/nvme_zns.o 00:02:25.295 CC lib/nvme/nvme_cuse.o 00:02:25.295 CC lib/nvme/nvme_vfio_user.o 00:02:25.295 CC lib/nvme/nvme_rdma.o 00:02:26.235 LIB libspdk_thread.a 00:02:26.235 SO libspdk_thread.so.9.0 00:02:26.497 SYMLINK libspdk_thread.so 00:02:26.497 LIB libspdk_nvme.a 00:02:26.497 SO libspdk_nvme.so.12.0 00:02:26.758 CC lib/accel/accel.o 00:02:26.758 CC lib/virtio/virtio.o 00:02:26.758 CC lib/accel/accel_rpc.o 00:02:26.758 CC lib/virtio/virtio_vhost_user.o 00:02:26.758 CC lib/accel/accel_sw.o 00:02:26.758 CC lib/virtio/virtio_vfio_user.o 00:02:26.758 CC lib/virtio/virtio_pci.o 00:02:26.758 CC lib/init/json_config.o 00:02:26.758 CC lib/init/subsystem.o 00:02:26.758 CC lib/blob/blobstore.o 00:02:26.758 CC lib/init/subsystem_rpc.o 00:02:26.758 CC lib/blob/request.o 00:02:26.758 CC lib/init/rpc.o 00:02:26.758 CC lib/blob/zeroes.o 00:02:26.758 CC lib/blob/blob_bs_dev.o 00:02:26.758 SYMLINK libspdk_nvme.so 00:02:27.020 LIB libspdk_init.a 00:02:27.020 SO libspdk_init.so.4.0 00:02:27.020 LIB libspdk_virtio.a 00:02:27.020 SO libspdk_virtio.so.6.0 00:02:27.020 SYMLINK libspdk_init.so 00:02:27.020 SYMLINK libspdk_virtio.so 00:02:27.281 CC lib/event/app.o 00:02:27.281 CC lib/event/reactor.o 00:02:27.281 CC lib/event/log_rpc.o 00:02:27.281 CC lib/event/scheduler_static.o 00:02:27.281 CC lib/event/app_rpc.o 00:02:27.543 LIB libspdk_accel.a 00:02:27.543 SO libspdk_accel.so.14.0 00:02:27.543 LIB libspdk_event.a 00:02:27.543 SYMLINK libspdk_accel.so 00:02:27.805 SO libspdk_event.so.12.0 00:02:27.805 SYMLINK libspdk_event.so 00:02:27.805 CC lib/bdev/bdev.o 00:02:27.805 CC lib/bdev/bdev_rpc.o 00:02:27.805 CC lib/bdev/bdev_zone.o 00:02:27.805 CC lib/bdev/part.o 00:02:27.805 CC lib/bdev/scsi_nvme.o 00:02:29.191 LIB libspdk_blob.a 00:02:29.191 SO libspdk_blob.so.10.1 00:02:29.191 SYMLINK libspdk_blob.so 00:02:29.452 CC lib/lvol/lvol.o 00:02:29.452 CC lib/blobfs/blobfs.o 00:02:29.452 CC lib/blobfs/tree.o 00:02:30.025 LIB libspdk_bdev.a 00:02:30.025 SO libspdk_bdev.so.14.0 00:02:30.025 LIB libspdk_blobfs.a 00:02:30.286 SO 
libspdk_blobfs.so.9.0 00:02:30.286 LIB libspdk_lvol.a 00:02:30.286 SYMLINK libspdk_bdev.so 00:02:30.286 SO libspdk_lvol.so.9.1 00:02:30.286 SYMLINK libspdk_blobfs.so 00:02:30.286 SYMLINK libspdk_lvol.so 00:02:30.286 CC lib/scsi/dev.o 00:02:30.286 CC lib/scsi/lun.o 00:02:30.545 CC lib/scsi/port.o 00:02:30.545 CC lib/scsi/scsi.o 00:02:30.545 CC lib/scsi/scsi_bdev.o 00:02:30.545 CC lib/scsi/scsi_pr.o 00:02:30.545 CC lib/scsi/task.o 00:02:30.545 CC lib/scsi/scsi_rpc.o 00:02:30.545 CC lib/ftl/ftl_core.o 00:02:30.545 CC lib/ublk/ublk.o 00:02:30.545 CC lib/ftl/ftl_init.o 00:02:30.545 CC lib/nbd/nbd.o 00:02:30.545 CC lib/ublk/ublk_rpc.o 00:02:30.545 CC lib/nbd/nbd_rpc.o 00:02:30.545 CC lib/ftl/ftl_layout.o 00:02:30.545 CC lib/nvmf/ctrlr.o 00:02:30.545 CC lib/ftl/ftl_debug.o 00:02:30.545 CC lib/nvmf/ctrlr_discovery.o 00:02:30.545 CC lib/ftl/ftl_io.o 00:02:30.545 CC lib/nvmf/ctrlr_bdev.o 00:02:30.545 CC lib/ftl/ftl_sb.o 00:02:30.545 CC lib/nvmf/subsystem.o 00:02:30.545 CC lib/ftl/ftl_l2p.o 00:02:30.545 CC lib/ftl/ftl_l2p_flat.o 00:02:30.545 CC lib/nvmf/nvmf.o 00:02:30.545 CC lib/nvmf/nvmf_rpc.o 00:02:30.545 CC lib/ftl/ftl_nv_cache.o 00:02:30.545 CC lib/ftl/ftl_band.o 00:02:30.545 CC lib/nvmf/transport.o 00:02:30.545 CC lib/ftl/ftl_band_ops.o 00:02:30.545 CC lib/nvmf/tcp.o 00:02:30.545 CC lib/ftl/ftl_writer.o 00:02:30.545 CC lib/nvmf/rdma.o 00:02:30.545 CC lib/ftl/ftl_rq.o 00:02:30.545 CC lib/ftl/ftl_reloc.o 00:02:30.545 CC lib/ftl/ftl_l2p_cache.o 00:02:30.545 CC lib/ftl/ftl_p2l.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:30.545 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:30.545 CC lib/ftl/utils/ftl_conf.o 00:02:30.545 CC lib/ftl/utils/ftl_md.o 00:02:30.545 CC lib/ftl/utils/ftl_bitmap.o 00:02:30.545 CC lib/ftl/utils/ftl_property.o 00:02:30.545 CC lib/ftl/utils/ftl_mempool.o 00:02:30.545 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:30.545 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:30.545 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:30.545 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:30.545 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:30.545 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:30.545 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:30.545 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:30.545 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:30.545 CC lib/ftl/ftl_trace.o 00:02:30.545 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.545 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:30.545 CC lib/ftl/base/ftl_base_dev.o 00:02:30.805 LIB libspdk_nbd.a 00:02:30.805 SO libspdk_nbd.so.6.0 00:02:30.805 LIB libspdk_scsi.a 00:02:31.065 SO libspdk_scsi.so.8.0 00:02:31.065 SYMLINK libspdk_nbd.so 00:02:31.065 SYMLINK libspdk_scsi.so 00:02:31.065 LIB libspdk_ublk.a 00:02:31.065 SO libspdk_ublk.so.2.0 00:02:31.326 SYMLINK libspdk_ublk.so 00:02:31.326 CC lib/vhost/vhost.o 00:02:31.326 CC lib/vhost/vhost_rpc.o 00:02:31.326 CC lib/iscsi/conn.o 00:02:31.326 CC lib/vhost/vhost_scsi.o 00:02:31.326 CC lib/iscsi/init_grp.o 00:02:31.326 CC lib/iscsi/md5.o 00:02:31.326 CC lib/vhost/vhost_blk.o 00:02:31.326 CC lib/iscsi/iscsi.o 00:02:31.326 CC lib/vhost/rte_vhost_user.o 
00:02:31.326 CC lib/iscsi/param.o 00:02:31.326 CC lib/iscsi/portal_grp.o 00:02:31.326 CC lib/iscsi/iscsi_subsystem.o 00:02:31.326 CC lib/iscsi/iscsi_rpc.o 00:02:31.326 CC lib/iscsi/tgt_node.o 00:02:31.326 CC lib/iscsi/task.o 00:02:31.326 LIB libspdk_ftl.a 00:02:31.586 SO libspdk_ftl.so.8.0 00:02:31.847 SYMLINK libspdk_ftl.so 00:02:32.108 LIB libspdk_vhost.a 00:02:32.108 LIB libspdk_nvmf.a 00:02:32.108 SO libspdk_vhost.so.7.1 00:02:32.369 SO libspdk_nvmf.so.17.0 00:02:32.369 SYMLINK libspdk_vhost.so 00:02:32.369 LIB libspdk_iscsi.a 00:02:32.369 SYMLINK libspdk_nvmf.so 00:02:32.369 SO libspdk_iscsi.so.7.0 00:02:32.630 SYMLINK libspdk_iscsi.so 00:02:32.892 CC module/env_dpdk/env_dpdk_rpc.o 00:02:33.154 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:33.154 CC module/blob/bdev/blob_bdev.o 00:02:33.154 CC module/accel/error/accel_error.o 00:02:33.154 CC module/accel/error/accel_error_rpc.o 00:02:33.154 CC module/scheduler/gscheduler/gscheduler.o 00:02:33.154 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:33.154 CC module/accel/ioat/accel_ioat.o 00:02:33.154 CC module/accel/dsa/accel_dsa.o 00:02:33.154 CC module/accel/iaa/accel_iaa.o 00:02:33.154 CC module/sock/posix/posix.o 00:02:33.154 CC module/accel/dsa/accel_dsa_rpc.o 00:02:33.154 CC module/accel/ioat/accel_ioat_rpc.o 00:02:33.154 CC module/accel/iaa/accel_iaa_rpc.o 00:02:33.154 LIB libspdk_env_dpdk_rpc.a 00:02:33.154 SO libspdk_env_dpdk_rpc.so.5.0 00:02:33.154 SYMLINK libspdk_env_dpdk_rpc.so 00:02:33.154 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.154 LIB libspdk_scheduler_gscheduler.a 00:02:33.416 LIB libspdk_scheduler_dynamic.a 00:02:33.416 LIB libspdk_accel_error.a 00:02:33.416 SO libspdk_scheduler_gscheduler.so.3.0 00:02:33.416 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:33.416 LIB libspdk_accel_ioat.a 00:02:33.416 SO libspdk_scheduler_dynamic.so.3.0 00:02:33.416 LIB libspdk_accel_iaa.a 00:02:33.416 LIB libspdk_accel_dsa.a 00:02:33.416 SO libspdk_accel_error.so.1.0 00:02:33.416 LIB libspdk_blob_bdev.a 00:02:33.416 SO libspdk_accel_ioat.so.5.0 00:02:33.416 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.416 SO libspdk_accel_dsa.so.4.0 00:02:33.416 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.416 SO libspdk_accel_iaa.so.2.0 00:02:33.416 SYMLINK libspdk_scheduler_dynamic.so 00:02:33.416 SO libspdk_blob_bdev.so.10.1 00:02:33.416 SYMLINK libspdk_accel_error.so 00:02:33.416 SYMLINK libspdk_accel_ioat.so 00:02:33.416 SYMLINK libspdk_accel_dsa.so 00:02:33.416 SYMLINK libspdk_accel_iaa.so 00:02:33.416 SYMLINK libspdk_blob_bdev.so 00:02:33.677 LIB libspdk_sock_posix.a 00:02:33.938 SO libspdk_sock_posix.so.5.0 00:02:33.938 CC module/bdev/error/vbdev_error.o 00:02:33.938 CC module/bdev/delay/vbdev_delay.o 00:02:33.938 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.938 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.938 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.938 CC module/bdev/malloc/bdev_malloc.o 00:02:33.938 CC module/bdev/raid/bdev_raid.o 00:02:33.938 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.938 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.938 CC module/bdev/gpt/gpt.o 00:02:33.938 CC module/bdev/nvme/bdev_nvme.o 00:02:33.938 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.938 CC module/bdev/null/bdev_null.o 00:02:33.938 CC module/bdev/null/bdev_null_rpc.o 00:02:33.938 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.938 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.938 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.938 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.938 CC 
module/blobfs/bdev/blobfs_bdev.o 00:02:33.938 CC module/bdev/raid/raid0.o 00:02:33.938 CC module/bdev/nvme/nvme_rpc.o 00:02:33.938 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.938 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.938 CC module/bdev/raid/raid1.o 00:02:33.938 CC module/bdev/raid/concat.o 00:02:33.938 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.938 CC module/bdev/nvme/vbdev_opal.o 00:02:33.938 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.938 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.938 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.938 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.938 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.938 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.938 CC module/bdev/ftl/bdev_ftl.o 00:02:33.938 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.938 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.938 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.938 CC module/bdev/split/vbdev_split.o 00:02:33.938 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.938 CC module/bdev/aio/bdev_aio.o 00:02:33.938 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.938 CC module/bdev/aio/bdev_aio_rpc.o 00:02:33.938 SYMLINK libspdk_sock_posix.so 00:02:33.938 LIB libspdk_blobfs_bdev.a 00:02:34.199 SO libspdk_blobfs_bdev.so.5.0 00:02:34.199 LIB libspdk_bdev_split.a 00:02:34.199 LIB libspdk_bdev_error.a 00:02:34.199 LIB libspdk_bdev_null.a 00:02:34.199 LIB libspdk_bdev_gpt.a 00:02:34.199 SO libspdk_bdev_error.so.5.0 00:02:34.199 LIB libspdk_bdev_ftl.a 00:02:34.199 LIB libspdk_bdev_passthru.a 00:02:34.199 SYMLINK libspdk_blobfs_bdev.so 00:02:34.199 SO libspdk_bdev_split.so.5.0 00:02:34.199 SO libspdk_bdev_null.so.5.0 00:02:34.199 SO libspdk_bdev_gpt.so.5.0 00:02:34.199 LIB libspdk_bdev_aio.a 00:02:34.199 SO libspdk_bdev_ftl.so.5.0 00:02:34.199 SO libspdk_bdev_passthru.so.5.0 00:02:34.199 SYMLINK libspdk_bdev_error.so 00:02:34.199 LIB libspdk_bdev_zone_block.a 00:02:34.199 SYMLINK libspdk_bdev_null.so 00:02:34.199 LIB libspdk_bdev_malloc.a 00:02:34.199 SO libspdk_bdev_aio.so.5.0 00:02:34.199 LIB libspdk_bdev_delay.a 00:02:34.199 LIB libspdk_bdev_iscsi.a 00:02:34.199 SYMLINK libspdk_bdev_split.so 00:02:34.199 SYMLINK libspdk_bdev_gpt.so 00:02:34.199 SO libspdk_bdev_iscsi.so.5.0 00:02:34.199 SO libspdk_bdev_zone_block.so.5.0 00:02:34.199 SO libspdk_bdev_malloc.so.5.0 00:02:34.199 SO libspdk_bdev_delay.so.5.0 00:02:34.199 SYMLINK libspdk_bdev_passthru.so 00:02:34.199 SYMLINK libspdk_bdev_ftl.so 00:02:34.199 SYMLINK libspdk_bdev_aio.so 00:02:34.199 SYMLINK libspdk_bdev_iscsi.so 00:02:34.199 LIB libspdk_bdev_lvol.a 00:02:34.199 SYMLINK libspdk_bdev_zone_block.so 00:02:34.199 SYMLINK libspdk_bdev_malloc.so 00:02:34.199 SYMLINK libspdk_bdev_delay.so 00:02:34.460 LIB libspdk_bdev_virtio.a 00:02:34.460 SO libspdk_bdev_lvol.so.5.0 00:02:34.460 SO libspdk_bdev_virtio.so.5.0 00:02:34.460 SYMLINK libspdk_bdev_lvol.so 00:02:34.460 SYMLINK libspdk_bdev_virtio.so 00:02:34.736 LIB libspdk_bdev_raid.a 00:02:34.736 SO libspdk_bdev_raid.so.5.0 00:02:34.736 SYMLINK libspdk_bdev_raid.so 00:02:35.734 LIB libspdk_bdev_nvme.a 00:02:35.734 SO libspdk_bdev_nvme.so.6.0 00:02:35.734 SYMLINK libspdk_bdev_nvme.so 00:02:36.307 CC module/event/subsystems/iobuf/iobuf.o 00:02:36.307 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:36.307 CC module/event/subsystems/vmd/vmd.o 00:02:36.307 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:36.307 CC module/event/subsystems/sock/sock.o 00:02:36.307 CC module/event/subsystems/scheduler/scheduler.o 00:02:36.307 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:36.307 LIB libspdk_event_sock.a 00:02:36.307 LIB libspdk_event_vmd.a 00:02:36.307 LIB libspdk_event_scheduler.a 00:02:36.307 LIB libspdk_event_iobuf.a 00:02:36.307 LIB libspdk_event_vhost_blk.a 00:02:36.307 SO libspdk_event_sock.so.4.0 00:02:36.307 SO libspdk_event_iobuf.so.2.0 00:02:36.307 SO libspdk_event_vmd.so.5.0 00:02:36.307 SO libspdk_event_scheduler.so.3.0 00:02:36.307 SO libspdk_event_vhost_blk.so.2.0 00:02:36.567 SYMLINK libspdk_event_sock.so 00:02:36.567 SYMLINK libspdk_event_iobuf.so 00:02:36.567 SYMLINK libspdk_event_scheduler.so 00:02:36.567 SYMLINK libspdk_event_vmd.so 00:02:36.567 SYMLINK libspdk_event_vhost_blk.so 00:02:36.567 CC module/event/subsystems/accel/accel.o 00:02:36.828 LIB libspdk_event_accel.a 00:02:36.828 SO libspdk_event_accel.so.5.0 00:02:36.828 SYMLINK libspdk_event_accel.so 00:02:37.088 CC module/event/subsystems/bdev/bdev.o 00:02:37.349 LIB libspdk_event_bdev.a 00:02:37.349 SO libspdk_event_bdev.so.5.0 00:02:37.349 SYMLINK libspdk_event_bdev.so 00:02:37.610 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.610 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.610 CC module/event/subsystems/scsi/scsi.o 00:02:37.610 CC module/event/subsystems/ublk/ublk.o 00:02:37.610 CC module/event/subsystems/nbd/nbd.o 00:02:37.872 LIB libspdk_event_ublk.a 00:02:37.872 LIB libspdk_event_nbd.a 00:02:37.872 LIB libspdk_event_scsi.a 00:02:37.872 SO libspdk_event_ublk.so.2.0 00:02:37.872 SO libspdk_event_nbd.so.5.0 00:02:37.872 SO libspdk_event_scsi.so.5.0 00:02:37.872 LIB libspdk_event_nvmf.a 00:02:37.872 SO libspdk_event_nvmf.so.5.0 00:02:37.872 SYMLINK libspdk_event_ublk.so 00:02:37.872 SYMLINK libspdk_event_nbd.so 00:02:37.872 SYMLINK libspdk_event_scsi.so 00:02:37.872 SYMLINK libspdk_event_nvmf.so 00:02:38.132 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.132 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.392 LIB libspdk_event_vhost_scsi.a 00:02:38.392 LIB libspdk_event_iscsi.a 00:02:38.392 SO libspdk_event_vhost_scsi.so.2.0 00:02:38.392 SO libspdk_event_iscsi.so.5.0 00:02:38.392 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.392 SYMLINK libspdk_event_iscsi.so 00:02:38.653 SO libspdk.so.5.0 00:02:38.653 SYMLINK libspdk.so 00:02:38.917 CC app/spdk_nvme_perf/perf.o 00:02:38.917 CC test/rpc_client/rpc_client_test.o 00:02:38.917 TEST_HEADER include/spdk/accel.h 00:02:38.917 TEST_HEADER include/spdk/accel_module.h 00:02:38.917 TEST_HEADER include/spdk/assert.h 00:02:38.917 TEST_HEADER include/spdk/barrier.h 00:02:38.917 CC app/spdk_top/spdk_top.o 00:02:38.917 CXX app/trace/trace.o 00:02:38.917 CC app/spdk_lspci/spdk_lspci.o 00:02:38.917 TEST_HEADER include/spdk/base64.h 00:02:38.917 TEST_HEADER include/spdk/bdev_module.h 00:02:38.917 TEST_HEADER include/spdk/bdev.h 00:02:38.917 TEST_HEADER include/spdk/bdev_zone.h 00:02:38.917 TEST_HEADER include/spdk/bit_array.h 00:02:38.917 TEST_HEADER include/spdk/bit_pool.h 00:02:38.917 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.917 TEST_HEADER include/spdk/blobfs.h 00:02:38.917 TEST_HEADER include/spdk/blob.h 00:02:38.917 TEST_HEADER include/spdk/blob_bdev.h 00:02:38.917 TEST_HEADER include/spdk/cpuset.h 00:02:38.918 CC app/spdk_nvme_discover/discovery_aer.o 00:02:38.918 TEST_HEADER include/spdk/crc16.h 00:02:38.918 CC app/spdk_nvme_identify/identify.o 00:02:38.918 TEST_HEADER include/spdk/crc32.h 00:02:38.918 TEST_HEADER include/spdk/dif.h 00:02:38.918 TEST_HEADER include/spdk/crc64.h 00:02:38.918 TEST_HEADER include/spdk/config.h 00:02:38.918 TEST_HEADER 
include/spdk/conf.h 00:02:38.918 TEST_HEADER include/spdk/dma.h 00:02:38.918 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.918 TEST_HEADER include/spdk/endian.h 00:02:38.918 CC app/trace_record/trace_record.o 00:02:38.918 TEST_HEADER include/spdk/env.h 00:02:38.918 TEST_HEADER include/spdk/event.h 00:02:38.918 CC app/spdk_tgt/spdk_tgt.o 00:02:38.918 TEST_HEADER include/spdk/fd_group.h 00:02:38.918 CC app/nvmf_tgt/nvmf_main.o 00:02:38.918 TEST_HEADER include/spdk/fd.h 00:02:38.918 TEST_HEADER include/spdk/file.h 00:02:38.918 TEST_HEADER include/spdk/ftl.h 00:02:38.918 TEST_HEADER include/spdk/gpt_spec.h 00:02:38.918 TEST_HEADER include/spdk/hexlify.h 00:02:38.918 TEST_HEADER include/spdk/histogram_data.h 00:02:38.918 TEST_HEADER include/spdk/idxd.h 00:02:38.918 TEST_HEADER include/spdk/idxd_spec.h 00:02:38.918 CC app/vhost/vhost.o 00:02:38.918 TEST_HEADER include/spdk/ioat.h 00:02:38.918 TEST_HEADER include/spdk/init.h 00:02:38.918 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:38.918 CC app/iscsi_tgt/iscsi_tgt.o 00:02:38.918 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.918 TEST_HEADER include/spdk/ioat_spec.h 00:02:38.918 TEST_HEADER include/spdk/json.h 00:02:38.918 TEST_HEADER include/spdk/likely.h 00:02:38.918 TEST_HEADER include/spdk/lvol.h 00:02:38.918 TEST_HEADER include/spdk/memory.h 00:02:38.918 TEST_HEADER include/spdk/mmio.h 00:02:38.918 TEST_HEADER include/spdk/jsonrpc.h 00:02:38.918 TEST_HEADER include/spdk/notify.h 00:02:38.918 TEST_HEADER include/spdk/nvme.h 00:02:38.918 TEST_HEADER include/spdk/nbd.h 00:02:38.918 TEST_HEADER include/spdk/nvme_intel.h 00:02:38.918 TEST_HEADER include/spdk/log.h 00:02:38.918 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:38.918 TEST_HEADER include/spdk/nvme_spec.h 00:02:38.918 TEST_HEADER include/spdk/nvme_zns.h 00:02:38.918 TEST_HEADER include/spdk/nvmf_spec.h 00:02:38.918 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:38.918 TEST_HEADER include/spdk/nvmf.h 00:02:38.918 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:38.918 TEST_HEADER include/spdk/opal.h 00:02:38.918 TEST_HEADER include/spdk/opal_spec.h 00:02:38.918 TEST_HEADER include/spdk/pci_ids.h 00:02:38.918 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:38.918 TEST_HEADER include/spdk/queue.h 00:02:38.918 TEST_HEADER include/spdk/nvmf_transport.h 00:02:38.918 CC app/spdk_dd/spdk_dd.o 00:02:38.918 CC examples/ioat/perf/perf.o 00:02:38.918 TEST_HEADER include/spdk/rpc.h 00:02:38.918 TEST_HEADER include/spdk/pipe.h 00:02:38.918 TEST_HEADER include/spdk/scsi.h 00:02:38.918 TEST_HEADER include/spdk/sock.h 00:02:38.918 TEST_HEADER include/spdk/reduce.h 00:02:38.918 TEST_HEADER include/spdk/thread.h 00:02:38.918 TEST_HEADER include/spdk/string.h 00:02:38.918 TEST_HEADER include/spdk/scheduler.h 00:02:38.918 TEST_HEADER include/spdk/scsi_spec.h 00:02:38.918 CC test/env/pci/pci_ut.o 00:02:38.918 TEST_HEADER include/spdk/stdinc.h 00:02:38.918 CC examples/ioat/verify/verify.o 00:02:38.918 TEST_HEADER include/spdk/tree.h 00:02:38.918 TEST_HEADER include/spdk/trace.h 00:02:38.918 TEST_HEADER include/spdk/ublk.h 00:02:38.918 CC test/env/memory/memory_ut.o 00:02:38.918 CC examples/idxd/perf/perf.o 00:02:38.918 CC test/event/event_perf/event_perf.o 00:02:38.918 TEST_HEADER include/spdk/uuid.h 00:02:38.918 TEST_HEADER include/spdk/trace_parser.h 00:02:38.918 CC test/env/vtophys/vtophys.o 00:02:38.918 TEST_HEADER include/spdk/util.h 00:02:38.918 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:38.918 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:38.918 CC test/nvme/compliance/nvme_compliance.o 00:02:38.918 
TEST_HEADER include/spdk/version.h 00:02:38.918 CC test/nvme/reset/reset.o 00:02:38.918 CC examples/sock/hello_world/hello_sock.o 00:02:38.918 CC test/app/jsoncat/jsoncat.o 00:02:38.918 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.918 CC examples/vmd/lsvmd/lsvmd.o 00:02:38.918 CC test/nvme/startup/startup.o 00:02:38.918 TEST_HEADER include/spdk/vhost.h 00:02:38.918 CC examples/nvme/hotplug/hotplug.o 00:02:38.918 TEST_HEADER include/spdk/vmd.h 00:02:38.918 CC test/nvme/connect_stress/connect_stress.o 00:02:38.918 CC test/event/reactor_perf/reactor_perf.o 00:02:38.918 TEST_HEADER include/spdk/zipf.h 00:02:38.918 TEST_HEADER include/spdk/xor.h 00:02:38.918 CXX test/cpp_headers/accel_module.o 00:02:38.918 CXX test/cpp_headers/base64.o 00:02:38.918 CC test/nvme/aer/aer.o 00:02:38.918 CXX test/cpp_headers/barrier.o 00:02:38.918 CXX test/cpp_headers/assert.o 00:02:38.918 CXX test/cpp_headers/accel.o 00:02:38.918 CC test/nvme/fdp/fdp.o 00:02:38.918 CC test/dma/test_dma/test_dma.o 00:02:38.918 CC examples/thread/thread/thread_ex.o 00:02:38.918 CXX test/cpp_headers/bdev_module.o 00:02:38.918 CC examples/blob/cli/blobcli.o 00:02:38.918 CXX test/cpp_headers/bit_array.o 00:02:38.918 CXX test/cpp_headers/bdev_zone.o 00:02:38.918 CXX test/cpp_headers/bdev.o 00:02:38.918 CC examples/vmd/led/led.o 00:02:38.918 CC test/nvme/cuse/cuse.o 00:02:38.918 CXX test/cpp_headers/bit_pool.o 00:02:38.918 CC test/app/bdev_svc/bdev_svc.o 00:02:38.918 CC test/blobfs/mkfs/mkfs.o 00:02:38.918 CXX test/cpp_headers/blobfs.o 00:02:38.918 CXX test/cpp_headers/blob.o 00:02:38.918 CXX test/cpp_headers/conf.o 00:02:38.918 CXX test/cpp_headers/config.o 00:02:38.918 CXX test/cpp_headers/cpuset.o 00:02:38.918 CXX test/cpp_headers/crc16.o 00:02:38.918 CXX test/cpp_headers/crc64.o 00:02:38.918 CXX test/cpp_headers/dma.o 00:02:38.918 CXX test/cpp_headers/dif.o 00:02:38.918 CXX test/cpp_headers/endian.o 00:02:38.918 CXX test/cpp_headers/env.o 00:02:38.918 CC examples/blob/hello_world/hello_blob.o 00:02:38.918 CXX test/cpp_headers/blob_bdev.o 00:02:38.918 CXX test/cpp_headers/fd_group.o 00:02:38.918 CXX test/cpp_headers/fd.o 00:02:38.918 CXX test/cpp_headers/crc32.o 00:02:38.918 CC test/nvme/sgl/sgl.o 00:02:38.918 CC test/nvme/e2edp/nvme_dp.o 00:02:38.918 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:38.918 CXX test/cpp_headers/blobfs_bdev.o 00:02:38.918 CXX test/cpp_headers/ftl.o 00:02:38.918 CXX test/cpp_headers/gpt_spec.o 00:02:38.918 CXX test/cpp_headers/hexlify.o 00:02:38.918 CC examples/nvme/hello_world/hello_world.o 00:02:38.918 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:38.918 CXX test/cpp_headers/histogram_data.o 00:02:39.203 CC test/app/histogram_perf/histogram_perf.o 00:02:39.203 CXX test/cpp_headers/env_dpdk.o 00:02:39.203 CXX test/cpp_headers/event.o 00:02:39.203 CXX test/cpp_headers/file.o 00:02:39.203 CXX test/cpp_headers/idxd_spec.o 00:02:39.203 CC test/event/reactor/reactor.o 00:02:39.203 CC test/event/app_repeat/app_repeat.o 00:02:39.203 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.203 CXX test/cpp_headers/init.o 00:02:39.203 CC test/nvme/err_injection/err_injection.o 00:02:39.203 CXX test/cpp_headers/ioat.o 00:02:39.203 CC examples/nvmf/nvmf/nvmf.o 00:02:39.203 CC test/nvme/overhead/overhead.o 00:02:39.203 CC test/thread/poller_perf/poller_perf.o 00:02:39.203 LINK spdk_lspci 00:02:39.203 CXX test/cpp_headers/idxd.o 00:02:39.203 CXX test/cpp_headers/likely.o 00:02:39.203 CXX test/cpp_headers/ioat_spec.o 00:02:39.203 CC examples/util/zipf/zipf.o 00:02:39.203 CXX test/cpp_headers/iscsi_spec.o 
00:02:39.203 CXX test/cpp_headers/json.o 00:02:39.203 CXX test/cpp_headers/memory.o 00:02:39.203 CXX test/cpp_headers/jsonrpc.o 00:02:39.203 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:39.203 CXX test/cpp_headers/notify.o 00:02:39.203 CC test/bdev/bdevio/bdevio.o 00:02:39.203 CXX test/cpp_headers/log.o 00:02:39.203 CXX test/cpp_headers/nvme.o 00:02:39.203 CXX test/cpp_headers/lvol.o 00:02:39.203 CC test/nvme/boot_partition/boot_partition.o 00:02:39.203 CC app/fio/bdev/fio_plugin.o 00:02:39.203 CXX test/cpp_headers/nvme_intel.o 00:02:39.203 CC test/app/stub/stub.o 00:02:39.203 CXX test/cpp_headers/mmio.o 00:02:39.203 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:39.203 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.203 CXX test/cpp_headers/nvme_spec.o 00:02:39.203 CXX test/cpp_headers/nbd.o 00:02:39.203 CC test/nvme/reserve/reserve.o 00:02:39.203 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.203 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.203 CC test/nvme/fused_ordering/fused_ordering.o 00:02:39.203 CXX test/cpp_headers/nvme_zns.o 00:02:39.203 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.204 CXX test/cpp_headers/opal.o 00:02:39.204 CXX test/cpp_headers/opal_spec.o 00:02:39.204 CXX test/cpp_headers/nvmf.o 00:02:39.204 CXX test/cpp_headers/pipe.o 00:02:39.204 CXX test/cpp_headers/queue.o 00:02:39.204 CXX test/cpp_headers/reduce.o 00:02:39.204 CXX test/cpp_headers/rpc.o 00:02:39.204 CXX test/cpp_headers/nvmf_spec.o 00:02:39.204 CXX test/cpp_headers/scheduler.o 00:02:39.204 CC test/accel/dif/dif.o 00:02:39.204 CXX test/cpp_headers/nvmf_transport.o 00:02:39.204 CXX test/cpp_headers/pci_ids.o 00:02:39.204 LINK rpc_client_test 00:02:39.204 CC examples/nvme/abort/abort.o 00:02:39.204 CC examples/nvme/arbitration/arbitration.o 00:02:39.204 CC examples/accel/perf/accel_perf.o 00:02:39.204 CC test/nvme/simple_copy/simple_copy.o 00:02:39.204 LINK nvmf_tgt 00:02:39.204 LINK spdk_tgt 00:02:39.204 CC examples/bdev/bdevperf/bdevperf.o 00:02:39.204 CXX test/cpp_headers/scsi.o 00:02:39.204 CC examples/nvme/reconnect/reconnect.o 00:02:39.204 CXX test/cpp_headers/scsi_spec.o 00:02:39.204 LINK jsoncat 00:02:39.204 LINK lsvmd 00:02:39.204 LINK spdk_trace_record 00:02:39.204 LINK ioat_perf 00:02:39.488 CC app/fio/nvme/fio_plugin.o 00:02:39.488 CC test/event/scheduler/scheduler.o 00:02:39.488 LINK reactor 00:02:39.488 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.488 LINK histogram_perf 00:02:39.488 LINK spdk_nvme_discover 00:02:39.488 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.488 LINK vtophys 00:02:39.488 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.488 CC test/lvol/esnap/esnap.o 00:02:39.488 LINK hello_sock 00:02:39.488 LINK app_repeat 00:02:39.488 CXX test/cpp_headers/sock.o 00:02:39.488 LINK poller_perf 00:02:39.488 LINK env_dpdk_post_init 00:02:39.488 LINK event_perf 00:02:39.488 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.488 LINK zipf 00:02:39.488 LINK cmb_copy 00:02:39.488 LINK iscsi_tgt 00:02:39.488 LINK err_injection 00:02:39.488 LINK reset 00:02:39.488 LINK spdk_trace 00:02:39.488 LINK stub 00:02:39.488 LINK doorbell_aers 00:02:39.488 LINK hello_blob 00:02:39.488 LINK interrupt_tgt 00:02:39.488 LINK nvme_compliance 00:02:39.488 LINK hello_bdev 00:02:39.488 CXX test/cpp_headers/stdinc.o 00:02:39.488 CXX test/cpp_headers/string.o 00:02:39.488 CXX test/cpp_headers/thread.o 00:02:39.488 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.488 CXX test/cpp_headers/trace.o 00:02:39.488 LINK idxd_perf 00:02:39.488 CXX test/cpp_headers/trace_parser.o 00:02:39.488 CXX 
test/cpp_headers/tree.o 00:02:39.488 CXX test/cpp_headers/ublk.o 00:02:39.488 LINK nvme_dp 00:02:39.488 LINK spdk_dd 00:02:39.488 CXX test/cpp_headers/util.o 00:02:39.488 LINK thread 00:02:39.488 CXX test/cpp_headers/uuid.o 00:02:39.488 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.488 LINK hotplug 00:02:39.488 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.488 CXX test/cpp_headers/version.o 00:02:39.488 CXX test/cpp_headers/vhost.o 00:02:39.488 CXX test/cpp_headers/vmd.o 00:02:39.488 CXX test/cpp_headers/xor.o 00:02:39.488 CXX test/cpp_headers/zipf.o 00:02:39.488 LINK aer 00:02:39.747 LINK overhead 00:02:39.747 LINK pci_ut 00:02:39.747 LINK simple_copy 00:02:39.747 LINK test_dma 00:02:39.747 LINK nvmf 00:02:39.747 LINK mkfs 00:02:39.747 LINK arbitration 00:02:39.747 LINK reactor_perf 00:02:39.747 LINK spdk_bdev 00:02:39.747 LINK vhost 00:02:40.007 LINK led 00:02:40.007 LINK bdev_svc 00:02:40.007 LINK nvme_manage 00:02:40.007 LINK connect_stress 00:02:40.007 LINK startup 00:02:40.007 LINK boot_partition 00:02:40.007 LINK vhost_fuzz 00:02:40.007 LINK spdk_nvme_perf 00:02:40.007 LINK nvme_fuzz 00:02:40.007 LINK verify 00:02:40.007 LINK pmr_persistence 00:02:40.007 LINK reserve 00:02:40.007 LINK fused_ordering 00:02:40.007 LINK hello_world 00:02:40.007 LINK fdp 00:02:40.007 LINK scheduler 00:02:40.007 LINK sgl 00:02:40.007 LINK mem_callbacks 00:02:40.268 LINK memory_ut 00:02:40.268 LINK dif 00:02:40.268 LINK abort 00:02:40.268 LINK reconnect 00:02:40.268 LINK cuse 00:02:40.268 LINK bdevio 00:02:40.268 LINK accel_perf 00:02:40.268 LINK blobcli 00:02:40.530 LINK spdk_nvme 00:02:40.530 LINK spdk_top 00:02:40.530 LINK bdevperf 00:02:40.530 LINK spdk_nvme_identify 00:02:41.103 LINK iscsi_fuzz 00:02:43.650 LINK esnap 00:02:43.650 00:02:43.650 real 0m45.818s 00:02:43.650 user 6m9.485s 00:02:43.650 sys 3m59.385s 00:02:43.650 20:58:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:43.650 20:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.650 ************************************ 00:02:43.650 END TEST make 00:02:43.650 ************************************ 00:02:43.650 20:58:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:43.650 20:58:21 -- nvmf/common.sh@7 -- # uname -s 00:02:43.650 20:58:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:43.650 20:58:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:43.650 20:58:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:43.650 20:58:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:43.650 20:58:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:43.650 20:58:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:43.650 20:58:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:43.650 20:58:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:43.650 20:58:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:43.650 20:58:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:43.650 20:58:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:43.650 20:58:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:43.650 20:58:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:43.650 20:58:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:43.650 20:58:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:43.650 20:58:21 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:43.650 20:58:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:43.650 20:58:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.650 20:58:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.650 20:58:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.650 20:58:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.650 20:58:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.650 20:58:21 -- paths/export.sh@5 -- # export PATH 00:02:43.650 20:58:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.650 20:58:21 -- nvmf/common.sh@46 -- # : 0 00:02:43.650 20:58:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:43.650 20:58:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:43.650 20:58:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:43.650 20:58:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:43.650 20:58:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:43.650 20:58:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:43.650 20:58:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:43.650 20:58:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:43.650 20:58:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:43.650 20:58:21 -- spdk/autotest.sh@32 -- # uname -s 00:02:43.650 20:58:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:43.650 20:58:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:43.650 20:58:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:43.650 20:58:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:43.650 20:58:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:43.650 20:58:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:43.650 20:58:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:43.650 20:58:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:43.910 20:58:21 -- spdk/autotest.sh@48 -- # udevadm_pid=2100204 00:02:43.910 20:58:21 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:43.910 20:58:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:43.910 20:58:21 -- spdk/autotest.sh@54 -- # echo 2100206 
00:02:43.910 20:58:21 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:43.910 20:58:21 -- spdk/autotest.sh@56 -- # echo 2100207 00:02:43.910 20:58:21 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:43.910 20:58:21 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:43.910 20:58:21 -- spdk/autotest.sh@60 -- # echo 2100208 00:02:43.910 20:58:21 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:43.910 20:58:21 -- spdk/autotest.sh@62 -- # echo 2100209 00:02:43.910 20:58:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:43.910 20:58:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:43.910 20:58:21 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:43.910 20:58:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:43.910 20:58:21 -- common/autotest_common.sh@10 -- # set +x 00:02:43.910 20:58:21 -- spdk/autotest.sh@70 -- # create_test_list 00:02:43.910 20:58:21 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:43.910 20:58:21 -- common/autotest_common.sh@10 -- # set +x 00:02:43.910 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:43.910 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:43.910 20:58:21 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:43.910 20:58:21 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.910 20:58:21 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.910 20:58:21 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:43.910 20:58:21 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.910 20:58:21 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:43.910 20:58:21 -- common/autotest_common.sh@1440 -- # uname 00:02:43.910 20:58:21 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:43.910 20:58:21 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:43.910 20:58:21 -- common/autotest_common.sh@1460 -- # uname 00:02:43.910 20:58:21 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:43.910 20:58:21 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:43.910 20:58:21 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:43.910 20:58:21 -- spdk/autotest.sh@83 -- # hash lcov 00:02:43.910 20:58:21 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:43.910 20:58:21 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:43.910 --rc lcov_branch_coverage=1 00:02:43.910 --rc lcov_function_coverage=1 00:02:43.911 --rc genhtml_branch_coverage=1 00:02:43.911 --rc genhtml_function_coverage=1 00:02:43.911 --rc genhtml_legend=1 00:02:43.911 --rc geninfo_all_blocks=1 00:02:43.911 ' 00:02:43.911 20:58:21 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 
00:02:43.911 --rc lcov_branch_coverage=1 00:02:43.911 --rc lcov_function_coverage=1 00:02:43.911 --rc genhtml_branch_coverage=1 00:02:43.911 --rc genhtml_function_coverage=1 00:02:43.911 --rc genhtml_legend=1 00:02:43.911 --rc geninfo_all_blocks=1 00:02:43.911 ' 00:02:43.911 20:58:21 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:43.911 --rc lcov_branch_coverage=1 00:02:43.911 --rc lcov_function_coverage=1 00:02:43.911 --rc genhtml_branch_coverage=1 00:02:43.911 --rc genhtml_function_coverage=1 00:02:43.911 --rc genhtml_legend=1 00:02:43.911 --rc geninfo_all_blocks=1 00:02:43.911 --no-external' 00:02:43.911 20:58:21 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:43.911 --rc lcov_branch_coverage=1 00:02:43.911 --rc lcov_function_coverage=1 00:02:43.911 --rc genhtml_branch_coverage=1 00:02:43.911 --rc genhtml_function_coverage=1 00:02:43.911 --rc genhtml_legend=1 00:02:43.911 --rc geninfo_all_blocks=1 00:02:43.911 --no-external' 00:02:43.911 20:58:21 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:43.911 lcov: LCOV version 1.14 00:02:43.911 20:58:21 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:56.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:56.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:56.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:56.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:56.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:08.416 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:08.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:08.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:08.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:08.683 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:08.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:08.683 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:08.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:08.683 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:08.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:08.683 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:08.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:08.683 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:08.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:08.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:08.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:08.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:08.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:08.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:08.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:08.946 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:08.946 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:08.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:08.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:09.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:09.207 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:09.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:09.207 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:09.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:09.207 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:09.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:09.207 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:09.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:09.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:09.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:11.114 20:58:48 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:11.114 20:58:48 -- common/autotest_common.sh@712 -- # xtrace_disable 
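The long run of geninfo warnings above is expected: the cpp_headers unit builds each public SPDK header as its own translation unit purely to prove the header is self-contained, so the resulting .gcno files contain no functions and GCOV has nothing to report. If those warnings were unwanted in a local coverage run, a capture along the following lines would filter them out; this is only an illustrative sketch with assumed file names (full.info, filtered.info, coverage_html), not the project's coverage tooling.

# Illustrative only: capture everything, then drop the header
# compile-check records, which legitimately contain no functions.
lcov --capture --directory . --output-file full.info
lcov --remove full.info '*/test/cpp_headers/*' --output-file filtered.info
genhtml filtered.info --output-directory coverage_html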
00:03:11.114 20:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:11.114 20:58:48 -- spdk/autotest.sh@102 -- # rm -f 00:03:11.114 20:58:48 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.417 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:14.417 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:14.417 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:14.677 20:58:52 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:14.677 20:58:52 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:14.677 20:58:52 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:14.677 20:58:52 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:14.677 20:58:52 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:14.677 20:58:52 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:14.677 20:58:52 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:14.677 20:58:52 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.677 20:58:52 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:14.677 20:58:52 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:14.677 20:58:52 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:14.677 20:58:52 -- spdk/autotest.sh@121 -- # grep -v p 00:03:14.677 20:58:52 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:14.677 20:58:52 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:14.677 20:58:52 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:14.677 20:58:52 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:14.677 20:58:52 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.939 No valid GPT data, bailing 00:03:14.939 20:58:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.939 20:58:52 -- scripts/common.sh@393 -- # pt= 00:03:14.939 20:58:52 -- scripts/common.sh@394 -- # return 1 00:03:14.939 20:58:52 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.939 1+0 records in 00:03:14.939 1+0 records out 00:03:14.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387457 s, 271 MB/s 00:03:14.939 20:58:52 -- spdk/autotest.sh@129 -- # sync 00:03:14.939 20:58:52 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.939 20:58:52 -- common/autotest_common.sh@22 -- # eval 
'reap_spdk_processes 12> /dev/null' 00:03:14.939 20:58:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:23.083 20:59:00 -- spdk/autotest.sh@135 -- # uname -s 00:03:23.083 20:59:00 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:23.083 20:59:00 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:23.083 20:59:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.083 20:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.083 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:03:23.083 ************************************ 00:03:23.083 START TEST setup.sh 00:03:23.083 ************************************ 00:03:23.083 20:59:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:23.083 * Looking for test storage... 00:03:23.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.083 20:59:00 -- setup/test-setup.sh@10 -- # uname -s 00:03:23.083 20:59:00 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:23.083 20:59:00 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:23.083 20:59:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.083 20:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.083 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:03:23.083 ************************************ 00:03:23.083 START TEST acl 00:03:23.083 ************************************ 00:03:23.083 20:59:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:23.083 * Looking for test storage... 
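Earlier in this block, autotest's pre-cleanup decides whether /dev/nvme0n1 is safe to scrub: spdk-gpt.py finds nothing ("No valid GPT data, bailing"), blkid reports an empty PTTYPE, and only then is the first mebibyte zeroed with dd. A stand-alone approximation of that guard is sketched below; the device name comes from this log, and the script paraphrases the traced logic rather than quoting autotest.sh.

#!/usr/bin/env bash
# Wipe the start of a disk only if no partition table is present (needs root).
dev=/dev/nvme0n1                       # device seen in the log above

# blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
pt=$(blkid -s PTTYPE -o value "$dev" || true)

if [[ -z "$pt" ]]; then
    # Known-blank device: clear the first 1 MiB so later tests start clean.
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync
else
    echo "refusing to wipe $dev: partition table '$pt' present" >&2
fi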
00:03:23.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.083 20:59:00 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:23.083 20:59:00 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:23.083 20:59:00 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:23.083 20:59:00 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:23.083 20:59:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:23.083 20:59:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:23.083 20:59:00 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:23.083 20:59:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.083 20:59:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:23.083 20:59:00 -- setup/acl.sh@12 -- # devs=() 00:03:23.083 20:59:00 -- setup/acl.sh@12 -- # declare -a devs 00:03:23.083 20:59:00 -- setup/acl.sh@13 -- # drivers=() 00:03:23.083 20:59:00 -- setup/acl.sh@13 -- # declare -A drivers 00:03:23.083 20:59:00 -- setup/acl.sh@51 -- # setup reset 00:03:23.083 20:59:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.083 20:59:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.294 20:59:04 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:27.294 20:59:04 -- setup/acl.sh@16 -- # local dev driver 00:03:27.294 20:59:04 -- setup/acl.sh@15 -- # setup output status 00:03:27.294 20:59:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.294 20:59:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.294 20:59:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.841 Hugepages 00:03:29.841 node hugesize free / total 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 00:03:29.841 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
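Both autotest.sh and acl.sh start by calling get_zoned_devs, and the xtrace above shows what that amounts to: for every NVMe block device, read /sys/block/<dev>/queue/zoned and treat any value other than "none" as a zoned namespace the generic tests must avoid. A minimal stand-alone version of that scan (my own variable names, not the helper from autotest_common.sh) looks like this:

#!/usr/bin/env bash
# Re-creation of the zoned-device scan traced above.
declare -A zoned_devs

for sysdir in /sys/block/nvme*; do
    [[ -e $sysdir/queue/zoned ]] || continue
    dev=${sysdir##*/}                       # e.g. nvme0n1
    if [[ $(<"$sysdir/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1                  # host-aware or host-managed
    fi
done

echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"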
00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.841 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.841 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.841 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.842 20:59:07 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:29.842 20:59:07 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.842 20:59:07 -- setup/acl.sh@20 -- # continue 00:03:29.842 20:59:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.842 20:59:07 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:29.842 20:59:07 -- setup/acl.sh@54 -- # run_test denied denied 00:03:29.842 20:59:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.842 20:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.842 20:59:07 -- common/autotest_common.sh@10 -- # set +x 00:03:29.842 ************************************ 00:03:29.842 START TEST denied 00:03:29.842 ************************************ 00:03:29.842 20:59:07 -- common/autotest_common.sh@1104 -- # denied 00:03:29.842 20:59:07 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:29.842 20:59:07 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:29.842 20:59:07 -- setup/acl.sh@38 -- # setup output config 00:03:29.842 20:59:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.842 20:59:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.049 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:34.049 20:59:11 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:34.049 20:59:11 -- setup/acl.sh@28 -- # local dev driver 00:03:34.049 20:59:11 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:34.049 20:59:11 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:34.049 20:59:11 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:34.049 20:59:11 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:34.049 20:59:11 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:34.049 20:59:11 -- setup/acl.sh@41 -- # setup reset 00:03:34.049 20:59:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.049 20:59:11 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.312 00:03:38.312 real 0m8.711s 00:03:38.312 user 0m2.921s 00:03:38.312 sys 0m5.069s 00:03:38.312 20:59:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.312 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:03:38.312 ************************************ 00:03:38.312 END TEST denied 00:03:38.312 ************************************ 00:03:38.312 20:59:16 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:38.312 20:59:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:38.312 20:59:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:38.312 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:03:38.312 ************************************ 00:03:38.312 START TEST allowed 00:03:38.312 ************************************ 00:03:38.312 20:59:16 -- common/autotest_common.sh@1104 -- # allowed 00:03:38.312 20:59:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:38.312 20:59:16 -- setup/acl.sh@45 -- # setup output config 00:03:38.312 20:59:16 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:38.312 20:59:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.312 20:59:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
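The denied test that finished just above leans on a documented setup.sh behaviour: any BDF listed in PCI_BLOCKED is skipped during binding and the script prints "Skipping denied controller at <bdf>", which the test greps for. Reproduced outside the harness it is roughly the following; the BDF is the one from this run, and setup.log is just an assumed scratch file name.

# Run from the SPDK repository root.
export PCI_BLOCKED='0000:65:00.0'
sudo -E ./scripts/setup.sh config | tee setup.log

grep 'Skipping denied controller at 0000:65:00.0' setup.log &&
    echo "controller was skipped as requested"

sudo ./scripts/setup.sh reset    # rebind the kernel drivers afterwards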
00:03:43.604 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:43.604 20:59:21 -- setup/acl.sh@47 -- # verify 00:03:43.604 20:59:21 -- setup/acl.sh@28 -- # local dev driver 00:03:43.604 20:59:21 -- setup/acl.sh@48 -- # setup reset 00:03:43.604 20:59:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.604 20:59:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.813 00:03:47.813 real 0m9.235s 00:03:47.813 user 0m2.650s 00:03:47.813 sys 0m4.812s 00:03:47.813 20:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.813 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.813 ************************************ 00:03:47.813 END TEST allowed 00:03:47.813 ************************************ 00:03:47.813 00:03:47.813 real 0m24.994s 00:03:47.813 user 0m8.118s 00:03:47.813 sys 0m14.466s 00:03:47.813 20:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.813 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.813 ************************************ 00:03:47.813 END TEST acl 00:03:47.813 ************************************ 00:03:47.813 20:59:25 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:47.813 20:59:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.813 20:59:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.813 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.813 ************************************ 00:03:47.813 START TEST hugepages 00:03:47.813 ************************************ 00:03:47.813 20:59:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:47.813 * Looking for test storage... 
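The allowed test is the mirror image: with PCI_ALLOWED limited to the NVMe controller, setup.sh config rebinds only that BDF (the "nvme -> vfio-pci" line shown above) and leaves every other device on its kernel driver. A hedged stand-alone version, again using the BDF and grep pattern from this run:

export PCI_ALLOWED='0000:65:00.0'
sudo -E ./scripts/setup.sh config | tee setup.log

# The pattern (borrowed from acl.sh) only checks that the nvme kernel
# driver was replaced by some userspace-capable driver for this one BDF.
grep -E '0000:65:00.0 .*: nvme -> .*' setup.log &&
    echo "only the allowed controller was rebound"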
00:03:47.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:47.813 20:59:25 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:47.813 20:59:25 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:47.813 20:59:25 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:47.813 20:59:25 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:47.813 20:59:25 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:47.813 20:59:25 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:47.813 20:59:25 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:47.813 20:59:25 -- setup/common.sh@18 -- # local node= 00:03:47.813 20:59:25 -- setup/common.sh@19 -- # local var val 00:03:47.813 20:59:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.813 20:59:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.813 20:59:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.813 20:59:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.813 20:59:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.813 20:59:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.813 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.813 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 103285020 kB' 'MemAvailable: 106546344 kB' 'Buffers: 2704 kB' 'Cached: 14291644 kB' 'SwapCached: 0 kB' 'Active: 11329316 kB' 'Inactive: 3514596 kB' 'Active(anon): 10917288 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553056 kB' 'Mapped: 176840 kB' 'Shmem: 10367724 kB' 'KReclaimable: 318060 kB' 'Slab: 1175600 kB' 'SReclaimable: 318060 kB' 'SUnreclaim: 857540 kB' 'KernelStack: 27152 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 12408172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.814 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.814 20:59:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # continue 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.815 20:59:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.815 20:59:25 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.815 20:59:25 -- setup/common.sh@33 -- # echo 2048 00:03:47.815 20:59:25 -- setup/common.sh@33 -- # return 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:47.815 20:59:25 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:47.815 20:59:25 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:47.815 20:59:25 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:47.815 20:59:25 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:47.815 20:59:25 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:47.815 20:59:25 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:47.815 20:59:25 -- setup/hugepages.sh@207 -- # get_nodes 00:03:47.815 20:59:25 -- setup/hugepages.sh@27 -- # local node 00:03:47.815 20:59:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.815 20:59:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:47.815 20:59:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.815 20:59:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:47.815 20:59:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.815 20:59:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.815 20:59:25 -- setup/hugepages.sh@208 -- # clear_hp 00:03:47.815 20:59:25 -- setup/hugepages.sh@37 -- # local node hp 00:03:47.815 20:59:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.815 20:59:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.815 20:59:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.815 20:59:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.815 20:59:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.815 20:59:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.815 20:59:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:47.815 20:59:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:47.815 20:59:25 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:47.815 20:59:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.815 20:59:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.815 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.815 ************************************ 00:03:47.815 START TEST default_setup 00:03:47.815 ************************************ 00:03:47.815 20:59:25 -- common/autotest_common.sh@1104 -- # default_setup 00:03:47.815 20:59:25 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.815 20:59:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.815 20:59:25 -- setup/hugepages.sh@51 -- # shift 00:03:47.815 20:59:25 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.815 20:59:25 -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.815 20:59:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.815 20:59:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.815 20:59:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.815 20:59:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.815 20:59:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.815 20:59:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.815 20:59:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.815 20:59:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.815 20:59:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
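The wall of xtrace above is hugepages.sh's get_meminfo helper doing nothing more exotic than splitting /proc/meminfo on ': ' and returning the value for a single key; here it walks the file until Hugepagesize matches and echoes 2048. A simplified stand-alone rendition (same function name, but system-wide only; the real helper can also read a per-node meminfo under /sys/devices/system/node) is:

# Simplified sketch of the lookup traced above.
get_meminfo() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"              # in kB for most meminfo fields
            return 0
        fi
    done < /proc/meminfo
    return 1
}

hugepagesize_kb=$(get_meminfo Hugepagesize)   # 2048 on this machine
echo "default hugepage size: ${hugepagesize_kb} kB"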
00:03:47.815 20:59:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.815 20:59:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.815 20:59:25 -- setup/hugepages.sh@73 -- # return 0 00:03:47.815 20:59:25 -- setup/hugepages.sh@137 -- # setup output 00:03:47.815 20:59:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.815 20:59:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.121 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:51.121 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:51.381 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:51.642 20:59:29 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:51.642 20:59:29 -- setup/hugepages.sh@89 -- # local node 00:03:51.642 20:59:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.642 20:59:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.642 20:59:29 -- setup/hugepages.sh@92 -- # local surp 00:03:51.642 20:59:29 -- setup/hugepages.sh@93 -- # local resv 00:03:51.642 20:59:29 -- setup/hugepages.sh@94 -- # local anon 00:03:51.642 20:59:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.642 20:59:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.642 20:59:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.642 20:59:29 -- setup/common.sh@18 -- # local node= 00:03:51.642 20:59:29 -- setup/common.sh@19 -- # local var val 00:03:51.642 20:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.642 20:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.642 20:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.642 20:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.642 20:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.642 20:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.642 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.642 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105438880 kB' 'MemAvailable: 108700188 kB' 'Buffers: 2704 kB' 'Cached: 14291772 kB' 'SwapCached: 0 kB' 'Active: 11341344 kB' 'Inactive: 3514596 kB' 'Active(anon): 10929316 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564912 kB' 'Mapped: 177012 kB' 'Shmem: 10367852 kB' 'KReclaimable: 318028 kB' 'Slab: 1173800 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855772 kB' 'KernelStack: 
27168 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235156 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.643 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.643 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.643 20:59:29 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.908 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.908 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.909 20:59:29 -- setup/common.sh@33 -- # echo 0 00:03:51.909 20:59:29 -- setup/common.sh@33 -- # return 0 00:03:51.909 20:59:29 -- setup/hugepages.sh@97 -- # anon=0 00:03:51.909 20:59:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.909 20:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.909 20:59:29 -- setup/common.sh@18 -- # local node= 00:03:51.909 20:59:29 -- setup/common.sh@19 -- # local var val 00:03:51.909 20:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.909 20:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.909 20:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.909 20:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.909 20:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.909 20:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105439384 kB' 'MemAvailable: 108700692 kB' 'Buffers: 2704 kB' 'Cached: 14291772 kB' 'SwapCached: 0 kB' 'Active: 11342360 kB' 'Inactive: 3514596 kB' 'Active(anon): 10930332 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566084 kB' 'Mapped: 177012 kB' 'Shmem: 10367852 kB' 'KReclaimable: 318028 kB' 'Slab: 1173808 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855780 kB' 'KernelStack: 27200 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12421984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 
20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.909 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.909 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': 
' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.910 20:59:29 -- setup/common.sh@33 -- # echo 0 00:03:51.910 20:59:29 -- setup/common.sh@33 -- # return 0 00:03:51.910 20:59:29 -- setup/hugepages.sh@99 -- # surp=0 00:03:51.910 20:59:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.910 20:59:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.910 20:59:29 -- setup/common.sh@18 -- # local node= 00:03:51.910 20:59:29 -- setup/common.sh@19 -- # local var val 00:03:51.910 20:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.910 20:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.910 20:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.910 20:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.910 20:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.910 20:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105438348 kB' 'MemAvailable: 108699656 kB' 'Buffers: 2704 kB' 'Cached: 14291784 kB' 'SwapCached: 0 kB' 'Active: 11341540 kB' 'Inactive: 3514596 kB' 'Active(anon): 10929512 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565748 kB' 'Mapped: 176996 kB' 'Shmem: 10367864 kB' 'KReclaimable: 318028 kB' 'Slab: 1173808 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855780 kB' 'KernelStack: 27200 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12423472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235188 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.910 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.910 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.910 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- 
setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.911 20:59:29 -- setup/common.sh@33 -- # echo 0 00:03:51.911 20:59:29 -- setup/common.sh@33 -- # return 0 00:03:51.911 20:59:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:51.911 20:59:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.911 nr_hugepages=1024 00:03:51.911 20:59:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.911 resv_hugepages=0 00:03:51.911 20:59:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.911 surplus_hugepages=0 00:03:51.911 20:59:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.911 anon_hugepages=0 00:03:51.911 20:59:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.911 20:59:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.911 20:59:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.911 20:59:29 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:51.911 20:59:29 -- setup/common.sh@18 -- # local node= 00:03:51.911 20:59:29 -- setup/common.sh@19 -- # local var val 00:03:51.911 20:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.911 20:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.911 20:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.911 20:59:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.911 20:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.911 20:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105439128 kB' 'MemAvailable: 108700436 kB' 'Buffers: 2704 kB' 'Cached: 14291800 kB' 'SwapCached: 0 kB' 'Active: 11341356 kB' 'Inactive: 3514596 kB' 'Active(anon): 10929328 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565012 kB' 'Mapped: 176996 kB' 'Shmem: 10367880 kB' 'KReclaimable: 318028 kB' 'Slab: 1173868 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855840 kB' 'KernelStack: 27264 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12422004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235188 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.911 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.911 20:59:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 
20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 
20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.912 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.912 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.913 20:59:29 -- setup/common.sh@33 -- # echo 1024 00:03:51.913 20:59:29 -- setup/common.sh@33 -- # return 0 00:03:51.913 20:59:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.913 20:59:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.913 20:59:29 -- setup/hugepages.sh@27 -- # local node 00:03:51.913 20:59:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.913 20:59:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.913 20:59:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.913 20:59:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:51.913 20:59:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.913 20:59:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.913 20:59:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.913 20:59:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.913 20:59:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.913 20:59:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.913 20:59:29 -- setup/common.sh@18 -- # local node=0 00:03:51.913 20:59:29 -- setup/common.sh@19 -- # local var val 00:03:51.913 20:59:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.913 20:59:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.913 20:59:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.913 20:59:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.913 20:59:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.913 20:59:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.913 20:59:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51154416 kB' 'MemUsed: 14504592 kB' 'SwapCached: 0 kB' 'Active: 7018140 kB' 'Inactive: 3323792 kB' 'Active(anon): 6868900 kB' 
'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115260 kB' 'Mapped: 62312 kB' 'AnonPages: 230028 kB' 'Shmem: 6642228 kB' 'KernelStack: 12664 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702404 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 516976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 
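The xtrace above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one "Key: value" pair at a time until the requested field (HugePages_Total system-wide, then HugePages_Surp for node 0) matches, at which point it echoes the value and returns. The following is a condensed sketch reconstructed from this trace, not the verbatim upstream helper; the shopt line and the standalone wrapper are additions so the example runs on its own.

  shopt -s extglob   # needed for the +([0-9]) pattern used to strip per-node prefixes

  # get_meminfo <field> [node]  -- print the value of <field> from /proc/meminfo,
  # or from the per-node meminfo file when a NUMA node number is given.
  get_meminfo() {
      local get=$1 node=$2
      local var val _ line
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then          # same field-by-field comparison the trace shows
              echo "$val"
              return 0
          fi
      done
      return 1
  }

Against the node0 snapshot printed above, get_meminfo HugePages_Surp 0 would print 0 and get_meminfo HugePages_Total would print 1024.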
00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.913 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.913 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # continue 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.914 20:59:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.914 20:59:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.914 20:59:29 -- setup/common.sh@33 -- # echo 0 00:03:51.914 20:59:29 -- setup/common.sh@33 -- # return 0 00:03:51.914 20:59:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.914 20:59:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.914 20:59:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.914 20:59:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.914 20:59:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:51.914 node0=1024 expecting 1024 00:03:51.914 20:59:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:51.914 00:03:51.914 real 0m4.044s 00:03:51.914 user 0m1.568s 00:03:51.914 sys 0m2.500s 00:03:51.914 20:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.914 20:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:51.914 ************************************ 00:03:51.914 END TEST default_setup 00:03:51.914 ************************************ 00:03:51.914 20:59:29 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:51.914 20:59:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.914 20:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.914 20:59:29 -- common/autotest_common.sh@10 -- # set +x 00:03:51.914 ************************************ 00:03:51.914 START TEST per_node_1G_alloc 00:03:51.914 ************************************ 00:03:51.914 20:59:29 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:51.914 20:59:29 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:51.914 20:59:29 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:51.914 20:59:29 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:51.914 20:59:29 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:51.914 20:59:29 -- setup/hugepages.sh@51 -- # shift 00:03:51.914 20:59:29 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:51.914 20:59:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.914 20:59:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.914 20:59:29 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:51.914 20:59:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:51.914 20:59:29 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:51.914 20:59:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.914 20:59:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:51.914 20:59:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.914 20:59:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.914 20:59:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.914 20:59:29 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:51.914 20:59:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.914 20:59:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:51.914 20:59:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.914 20:59:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:51.914 20:59:29 -- setup/hugepages.sh@73 -- # return 0 00:03:51.914 20:59:29 -- 
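At this point per_node_1G_alloc has translated its 1048576 kB request into a per-node hugepage count: with the 2048 kB Hugepagesize reported in the meminfo dumps, that is 512 default-size pages, and the node list (0 1) means 512 pages are requested on each NUMA node before scripts/setup.sh is invoked with NRHUGE=512 HUGENODE=0,1. A minimal sketch of that arithmetic, using illustrative variable names rather than the exact hugepages.sh internals:

  # Sizing logic mirrored from the get_test_nr_hugepages trace above (illustrative).
  size_kb=1048576                                    # 1 GiB requested by per_node_1G_alloc
  hugepage_kb=2048                                   # Hugepagesize from the meminfo dumps
  nr_hugepages=$(( size_kb / hugepage_kb ))          # 512 pages
  nodes_test=()
  for node_id in 0 1; do                             # HUGENODE=0,1 in this run
      nodes_test[node_id]=$nr_hugepages              # 512 pages requested on each node
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=0,1"           # matches the setup.sh invocation that follows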
setup/hugepages.sh@146 -- # NRHUGE=512 00:03:51.914 20:59:29 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:51.914 20:59:29 -- setup/hugepages.sh@146 -- # setup output 00:03:51.914 20:59:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.914 20:59:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.216 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.216 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.216 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.477 20:59:33 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:55.477 20:59:33 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:55.477 20:59:33 -- setup/hugepages.sh@89 -- # local node 00:03:55.477 20:59:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.477 20:59:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.477 20:59:33 -- setup/hugepages.sh@92 -- # local surp 00:03:55.477 20:59:33 -- setup/hugepages.sh@93 -- # local resv 00:03:55.477 20:59:33 -- setup/hugepages.sh@94 -- # local anon 00:03:55.477 20:59:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.477 20:59:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.477 20:59:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.477 20:59:33 -- setup/common.sh@18 -- # local node= 00:03:55.477 20:59:33 -- setup/common.sh@19 -- # local var val 00:03:55.477 20:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.477 20:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.477 20:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.477 20:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.477 20:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.477 20:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105449800 kB' 'MemAvailable: 108711108 kB' 'Buffers: 2704 kB' 'Cached: 14291920 kB' 'SwapCached: 0 kB' 'Active: 11338452 kB' 'Inactive: 3514596 kB' 'Active(anon): 10926424 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 561220 kB' 'Mapped: 175816 kB' 'Shmem: 10368000 kB' 'KReclaimable: 318028 kB' 'Slab: 1174004 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855976 kB' 'KernelStack: 27120 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12412180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.477 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.477 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- 
setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 
-- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 20:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.741 20:59:33 -- setup/common.sh@33 -- # echo 0 00:03:55.741 20:59:33 -- setup/common.sh@33 -- # return 0 00:03:55.741 20:59:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.741 20:59:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.741 20:59:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.741 20:59:33 -- setup/common.sh@18 -- # local node= 00:03:55.741 20:59:33 -- setup/common.sh@19 -- # local var val 00:03:55.742 20:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.742 20:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.742 20:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.742 20:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.742 20:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.742 20:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105451968 kB' 'MemAvailable: 108713276 kB' 'Buffers: 2704 kB' 'Cached: 14291924 kB' 'SwapCached: 0 kB' 'Active: 11339176 kB' 'Inactive: 3514596 kB' 'Active(anon): 10927148 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562028 kB' 'Mapped: 175816 kB' 'Shmem: 10368004 kB' 'KReclaimable: 318028 kB' 'Slab: 1173968 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855940 kB' 'KernelStack: 27312 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12413840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 
-- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- 
setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 20:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 
20:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 
20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.743 20:59:33 -- setup/common.sh@33 -- # echo 0 00:03:55.743 20:59:33 -- setup/common.sh@33 -- # return 0 00:03:55.743 20:59:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.743 20:59:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.743 20:59:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.743 20:59:33 -- setup/common.sh@18 -- # local node= 00:03:55.743 20:59:33 -- setup/common.sh@19 -- # local var val 00:03:55.743 20:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.743 20:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.743 20:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.743 20:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.743 20:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.743 20:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105453348 kB' 'MemAvailable: 108714656 kB' 'Buffers: 2704 kB' 'Cached: 14291940 kB' 'SwapCached: 0 kB' 'Active: 11339616 kB' 'Inactive: 3514596 kB' 'Active(anon): 10927588 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562976 kB' 'Mapped: 175824 kB' 'Shmem: 10368020 kB' 'KReclaimable: 318028 kB' 'Slab: 1174028 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856000 kB' 'KernelStack: 27280 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12414224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- 
setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.743 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 20:59:33 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 
20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 
20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.744 20:59:33 -- setup/common.sh@33 -- # echo 0 00:03:55.744 20:59:33 -- setup/common.sh@33 -- # return 0 00:03:55.744 20:59:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.744 20:59:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.744 nr_hugepages=1024 00:03:55.744 20:59:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.744 resv_hugepages=0 00:03:55.744 20:59:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.744 surplus_hugepages=0 00:03:55.744 20:59:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.744 anon_hugepages=0 00:03:55.744 20:59:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.744 20:59:33 -- 
setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.744 20:59:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.744 20:59:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.744 20:59:33 -- setup/common.sh@18 -- # local node= 00:03:55.744 20:59:33 -- setup/common.sh@19 -- # local var val 00:03:55.744 20:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.744 20:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.744 20:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.744 20:59:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.744 20:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.744 20:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.744 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105458260 kB' 'MemAvailable: 108719568 kB' 'Buffers: 2704 kB' 'Cached: 14291956 kB' 'SwapCached: 0 kB' 'Active: 11338840 kB' 'Inactive: 3514596 kB' 'Active(anon): 10926812 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562156 kB' 'Mapped: 175816 kB' 'Shmem: 10368036 kB' 'KReclaimable: 318028 kB' 'Slab: 1174028 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856000 kB' 'KernelStack: 27360 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12412596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 
20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.745 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.746 20:59:33 -- setup/common.sh@33 -- # echo 1024 00:03:55.746 20:59:33 -- setup/common.sh@33 -- # return 0 00:03:55.746 20:59:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.746 20:59:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.746 20:59:33 -- setup/hugepages.sh@27 -- # local node 00:03:55.746 20:59:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.746 20:59:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.746 20:59:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.746 20:59:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.746 20:59:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.746 20:59:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.746 20:59:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.746 20:59:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.746 20:59:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.746 20:59:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.746 20:59:33 -- setup/common.sh@18 -- # local node=0 00:03:55.746 20:59:33 -- setup/common.sh@19 -- # local var val 00:03:55.746 20:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.746 20:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.746 20:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.746 20:59:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.746 20:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.746 20:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # IFS=': 
' 00:03:55.746 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52200952 kB' 'MemUsed: 13458056 kB' 'SwapCached: 0 kB' 'Active: 7016468 kB' 'Inactive: 3323792 kB' 'Active(anon): 6867228 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115360 kB' 'Mapped: 61520 kB' 'AnonPages: 228164 kB' 'Shmem: 6642328 kB' 'KernelStack: 12648 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702356 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 516928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- 
# continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 20:59:33 -- setup/common.sh@33 -- # echo 0 00:03:55.747 20:59:33 -- setup/common.sh@33 -- # return 0 00:03:55.747 20:59:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.747 20:59:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.747 20:59:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.747 20:59:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.747 20:59:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.747 20:59:33 -- setup/common.sh@18 -- # local node=1 00:03:55.747 20:59:33 -- setup/common.sh@19 -- # local var val 00:03:55.747 20:59:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.748 20:59:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.748 20:59:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.748 20:59:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.748 20:59:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.748 20:59:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53260416 kB' 'MemUsed: 7419424 kB' 'SwapCached: 0 kB' 'Active: 4322088 kB' 'Inactive: 190804 kB' 'Active(anon): 4059300 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 190804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4179320 kB' 'Mapped: 114296 kB' 'AnonPages: 333640 kB' 'Shmem: 3725728 kB' 'KernelStack: 14600 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132600 kB' 'Slab: 471672 kB' 'SReclaimable: 132600 kB' 'SUnreclaim: 339072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 
-- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # continue 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 20:59:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 20:59:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 20:59:33 -- setup/common.sh@33 -- # echo 0 00:03:55.749 20:59:33 -- setup/common.sh@33 -- # return 0 00:03:55.749 20:59:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.749 20:59:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.749 20:59:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.749 20:59:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.749 node0=512 expecting 512 00:03:55.749 20:59:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.749 20:59:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.749 20:59:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.749 20:59:33 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:55.749 node1=512 expecting 512 00:03:55.749 20:59:33 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.749 00:03:55.749 real 0m3.809s 00:03:55.749 user 0m1.481s 00:03:55.749 sys 0m2.377s 00:03:55.749 20:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.749 20:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.749 ************************************ 00:03:55.749 END TEST 
per_node_1G_alloc 00:03:55.749 ************************************ 00:03:55.749 20:59:33 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:55.749 20:59:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.749 20:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.749 20:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.749 ************************************ 00:03:55.749 START TEST even_2G_alloc 00:03:55.749 ************************************ 00:03:55.749 20:59:33 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:55.749 20:59:33 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:55.749 20:59:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.749 20:59:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.749 20:59:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.749 20:59:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.749 20:59:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.749 20:59:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.749 20:59:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.749 20:59:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.749 20:59:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.749 20:59:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:55.749 20:59:33 -- setup/hugepages.sh@83 -- # : 512 00:03:55.749 20:59:33 -- setup/hugepages.sh@84 -- # : 1 00:03:55.749 20:59:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:55.749 20:59:33 -- setup/hugepages.sh@83 -- # : 0 00:03:55.749 20:59:33 -- setup/hugepages.sh@84 -- # : 0 00:03:55.749 20:59:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.749 20:59:33 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:55.749 20:59:33 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:55.749 20:59:33 -- setup/hugepages.sh@153 -- # setup output 00:03:55.749 20:59:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.749 20:59:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.050 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.050 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.050 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.627 20:59:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:59.627 20:59:37 -- setup/hugepages.sh@89 -- # local node 00:03:59.627 20:59:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.627 20:59:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.627 20:59:37 -- setup/hugepages.sh@92 -- # local surp 00:03:59.627 20:59:37 -- setup/hugepages.sh@93 -- # local resv 00:03:59.627 20:59:37 -- setup/hugepages.sh@94 -- # local anon 00:03:59.627 20:59:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.627 20:59:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.627 20:59:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.627 20:59:37 -- setup/common.sh@18 -- # local node= 00:03:59.627 20:59:37 -- setup/common.sh@19 -- # local var val 00:03:59.627 20:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.627 20:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.627 20:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.627 20:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.627 20:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.627 20:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.627 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105467772 kB' 'MemAvailable: 108729080 kB' 'Buffers: 2704 kB' 'Cached: 14292072 kB' 'SwapCached: 0 kB' 'Active: 11339528 kB' 'Inactive: 3514596 kB' 'Active(anon): 10927500 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562244 kB' 'Mapped: 175920 kB' 'Shmem: 10368152 kB' 'KReclaimable: 318028 kB' 'Slab: 1174412 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856384 kB' 'KernelStack: 27168 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12410196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- 
setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 20:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.629 20:59:37 -- setup/common.sh@33 -- # echo 0 00:03:59.629 20:59:37 -- setup/common.sh@33 -- # return 0 00:03:59.629 20:59:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.629 20:59:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.629 20:59:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.629 20:59:37 -- setup/common.sh@18 -- # local node= 00:03:59.629 20:59:37 -- setup/common.sh@19 -- # local var val 00:03:59.629 20:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.629 20:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.629 20:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.629 20:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.629 20:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.629 20:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105468568 kB' 'MemAvailable: 108729876 kB' 'Buffers: 2704 kB' 'Cached: 14292076 kB' 'SwapCached: 0 kB' 'Active: 11338600 kB' 'Inactive: 3514596 kB' 'Active(anon): 10926572 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561688 kB' 'Mapped: 175824 kB' 'Shmem: 10368156 kB' 'KReclaimable: 318028 kB' 'Slab: 1174380 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856352 kB' 'KernelStack: 27120 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12410208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 
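The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above are one meminfo scan: the helper walks every key until it reaches AnonHugePages, echoes its value (0 on this runner), and hugepages.sh records it as anon=0 before starting the same scan again for HugePages_Surp. A minimal sketch of that loop, assuming the helper name and a system-wide /proc/meminfo read (the real setup/common.sh also handles the per-node files):

    # sketch only: mirrors the traced key-by-key scan, not the exact SPDK source
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key, as the trace shows
            echo "$val"
            return 0
        done < /proc/meminfo
    }
    # get_meminfo_sketch AnonHugePages   -> 0 on this runner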
00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.629 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.630 20:59:37 -- setup/common.sh@33 -- # echo 0 00:03:59.630 20:59:37 -- setup/common.sh@33 -- # return 0 00:03:59.630 20:59:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.630 20:59:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.630 20:59:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.630 20:59:37 -- setup/common.sh@18 -- # local node= 00:03:59.630 20:59:37 -- setup/common.sh@19 -- # local var val 00:03:59.630 20:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.630 20:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.630 20:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.630 20:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.630 20:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.630 20:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
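The mem=("${mem[@]#Node +([0-9]) }") rewrite that precedes each scan strips the "Node N " prefix carried by the per-node meminfo files under /sys; for the system-wide /proc/meminfo read it is a no-op. A small sketch of that normalization, assuming a NUMA node0 is present and extglob is enabled as in the traced shell:

    # sketch: strip the "Node 0 " prefix so per-node lines parse like /proc/meminfo
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # plain "Key: value" lines from here on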
00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105468568 kB' 'MemAvailable: 108729876 kB' 'Buffers: 2704 kB' 'Cached: 14292076 kB' 'SwapCached: 0 kB' 'Active: 11339160 kB' 'Inactive: 3514596 kB' 'Active(anon): 10927132 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562228 kB' 'Mapped: 175824 kB' 'Shmem: 10368156 kB' 'KReclaimable: 318028 kB' 'Slab: 1174380 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856352 kB' 'KernelStack: 27136 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12410224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- 
setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 20:59:37 -- setup/common.sh@33 -- # echo 0 00:03:59.632 20:59:37 -- setup/common.sh@33 -- # return 0 00:03:59.632 20:59:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.632 20:59:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.632 nr_hugepages=1024 00:03:59.632 20:59:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.632 resv_hugepages=0 00:03:59.632 20:59:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.632 surplus_hugepages=0 00:03:59.632 20:59:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.632 anon_hugepages=0 00:03:59.632 20:59:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.632 20:59:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.632 20:59:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.632 20:59:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.632 20:59:37 -- setup/common.sh@18 -- # local node= 00:03:59.632 20:59:37 -- setup/common.sh@19 -- # local var val 00:03:59.632 20:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.632 20:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.632 20:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.632 20:59:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.632 20:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.632 20:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105469032 kB' 'MemAvailable: 108730340 kB' 'Buffers: 2704 kB' 'Cached: 14292080 kB' 'SwapCached: 0 kB' 'Active: 11338792 kB' 'Inactive: 3514596 kB' 'Active(anon): 10926764 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 561856 kB' 'Mapped: 175824 kB' 'Shmem: 10368160 kB' 'KReclaimable: 318028 kB' 'Slab: 1174380 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856352 kB' 'KernelStack: 27120 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12410236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # 
continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.632 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 20:59:37 -- setup/common.sh@33 -- # echo 1024 00:03:59.633 20:59:37 -- setup/common.sh@33 -- # return 0 00:03:59.633 20:59:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.633 20:59:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.633 20:59:37 -- setup/hugepages.sh@27 -- # local node 00:03:59.633 20:59:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.633 20:59:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.633 20:59:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.633 20:59:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.633 20:59:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.633 20:59:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.633 20:59:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.633 20:59:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.633 20:59:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.633 20:59:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.633 20:59:37 -- setup/common.sh@18 -- # local node=0 00:03:59.633 20:59:37 -- setup/common.sh@19 -- # local var val 00:03:59.633 20:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.633 20:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.633 20:59:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.633 20:59:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.633 20:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.633 20:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52204016 kB' 'MemUsed: 13454992 kB' 'SwapCached: 0 kB' 'Active: 7017196 kB' 'Inactive: 3323792 kB' 'Active(anon): 6867956 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115508 kB' 'Mapped: 61516 kB' 'AnonPages: 228704 kB' 'Shmem: 6642476 kB' 'KernelStack: 12648 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702460 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 517032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 
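The get_nodes/nodes_sys entries above split the expected 1024 hugepages evenly across the two detected NUMA nodes (512 each) before checking each node's own counters; the scan that follows is the node0 HugePages_Surp lookup against /sys/devices/system/node/node0/meminfo. A rough sketch of that per-node accounting, assuming two nodes and the zero reserved/surplus pages reported in this run:

    # sketch of the per-node expectation check implied by the trace above
    nodes_test=([0]=512 [1]=512)   # 1024 hugepages split across 2 nodes
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(awk '/HugePages_Surp:/ {print $NF}' \
               "/sys/devices/system/node/node${node}/meminfo")
        (( nodes_test[node] += surp ))
    done
    echo "${nodes_test[@]}"   # expected per-node totals: 512 512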
00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.633 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@33 -- # echo 0 00:03:59.634 20:59:37 -- setup/common.sh@33 -- # return 0 00:03:59.634 20:59:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.634 20:59:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.634 20:59:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.634 20:59:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.634 20:59:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.634 20:59:37 -- setup/common.sh@18 -- # local node=1 00:03:59.634 20:59:37 -- setup/common.sh@19 -- # local var val 00:03:59.634 20:59:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.634 20:59:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.634 20:59:37 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node1/meminfo ]] 00:03:59.634 20:59:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.634 20:59:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.634 20:59:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53265104 kB' 'MemUsed: 7414736 kB' 'SwapCached: 0 kB' 'Active: 4321304 kB' 'Inactive: 190804 kB' 'Active(anon): 4058516 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 190804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4179324 kB' 'Mapped: 114308 kB' 'AnonPages: 332808 kB' 'Shmem: 3725732 kB' 'KernelStack: 14472 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132600 kB' 'Slab: 471920 kB' 'SReclaimable: 132600 kB' 'SUnreclaim: 339320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.634 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
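A side note on the per-node scan above: the snapshot it walks (read from /sys/devices/system/node/node1/meminfo) already carries the hugepage counters directly, so they can also be inspected with a one-liner. This is only an inspection aid, assuming the standard sysfs layout and the 2-node topology seen in this log; it is not part of the test suite.

    # Inspection aid (assumed 2-node layout as in this log): dump the per-node
    # hugepage counters that the trace above scans field by field.
    grep -EH 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node*/meminfo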
00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # continue 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 20:59:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 20:59:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 20:59:37 -- setup/common.sh@33 -- # echo 0 00:03:59.635 20:59:37 -- setup/common.sh@33 -- # return 0 00:03:59.635 20:59:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.635 20:59:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.635 20:59:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.635 20:59:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.635 20:59:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.635 node0=512 expecting 512 00:03:59.635 20:59:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.635 20:59:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.635 20:59:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.635 20:59:37 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:59.635 node1=512 expecting 512 00:03:59.635 20:59:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.635 00:03:59.635 real 0m3.826s 00:03:59.635 user 0m1.479s 00:03:59.635 sys 0m2.397s 00:03:59.635 20:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.635 20:59:37 -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 END TEST even_2G_alloc 00:03:59.635 ************************************ 00:03:59.635 20:59:37 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:59.635 20:59:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.635 20:59:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.635 20:59:37 -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 START TEST odd_alloc 00:03:59.635 ************************************ 00:03:59.635 20:59:37 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:59.635 20:59:37 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:59.635 20:59:37 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:59.635 20:59:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.635 20:59:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.635 20:59:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:59.635 20:59:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.635 20:59:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.635 20:59:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.635 20:59:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:59.635 20:59:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.636 20:59:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.636 
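The odd_alloc case starting above requests 1025 hugepages and then distributes them across the two NUMA nodes; the trace that follows resolves this to 513 on one node and 512 on the other. Below is a minimal sketch of such an as-even-as-possible split, assuming only a total count and a node count. The function name split_hugepages is illustrative, not the project's actual helper, which also honors user-specified node lists.

    # Sketch: split an odd hugepage total across NUMA nodes as evenly as possible.
    split_hugepages() {
      local total=$1 nodes=$2 i base rem
      base=$((total / nodes))
      rem=$((total % nodes))
      for ((i = 0; i < nodes; i++)); do
        # the first "rem" nodes take one extra page each
        echo "node${i}=$((base + (i < rem ? 1 : 0)))"
      done
    }
    split_hugepages 1025 2   # prints node0=513 and node1=512, matching this run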
20:59:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.636 20:59:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.636 20:59:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.636 20:59:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.636 20:59:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.636 20:59:37 -- setup/hugepages.sh@83 -- # : 513 00:03:59.636 20:59:37 -- setup/hugepages.sh@84 -- # : 1 00:03:59.636 20:59:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.636 20:59:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:59.636 20:59:37 -- setup/hugepages.sh@83 -- # : 0 00:03:59.636 20:59:37 -- setup/hugepages.sh@84 -- # : 0 00:03:59.636 20:59:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.636 20:59:37 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:59.636 20:59:37 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:59.636 20:59:37 -- setup/hugepages.sh@160 -- # setup output 00:03:59.636 20:59:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.636 20:59:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.993 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.993 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.993 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.259 20:59:41 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:03.259 20:59:41 -- setup/hugepages.sh@89 -- # local node 00:04:03.259 20:59:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.259 20:59:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.259 20:59:41 -- setup/hugepages.sh@92 -- # local surp 00:04:03.259 20:59:41 -- setup/hugepages.sh@93 -- # local resv 00:04:03.259 20:59:41 -- setup/hugepages.sh@94 -- # local anon 00:04:03.259 20:59:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.259 20:59:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.259 20:59:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.259 20:59:41 -- setup/common.sh@18 -- # local node= 00:04:03.259 20:59:41 -- setup/common.sh@19 -- # local var val 00:04:03.259 20:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.259 20:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.259 20:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.259 20:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 
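The get_meminfo call being traced here follows a simple pattern: pick /proc/meminfo or, when a node id is supplied, that node's meminfo file; strip the "Node N " prefix that per-node files carry; then scan field by field until the requested key is found and echo its value. The condensed sketch below mirrors that pattern; the function name and layout are illustrative only, not the project's script.

    # Sketch of the get_meminfo pattern visible in the trace: return one field's
    # value from /proc/meminfo or, when a node is given, from that node's meminfo file.
    get_meminfo_sketch() {
      local get=$1 node=$2 mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
        # per-node files prefix every field with "Node <id> "; drop that prefix
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
    }
    get_meminfo_sketch HugePages_Surp 1   # on the system in this log this prints 0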
00:04:03.259 20:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.259 20:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105472632 kB' 'MemAvailable: 108733940 kB' 'Buffers: 2704 kB' 'Cached: 14292216 kB' 'SwapCached: 0 kB' 'Active: 11346484 kB' 'Inactive: 3514596 kB' 'Active(anon): 10934456 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569540 kB' 'Mapped: 176700 kB' 'Shmem: 10368296 kB' 'KReclaimable: 318028 kB' 'Slab: 1173620 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855592 kB' 'KernelStack: 27184 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12417112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235352 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.259 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.259 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # 
continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.260 20:59:41 -- setup/common.sh@33 -- # echo 0 00:04:03.260 20:59:41 -- setup/common.sh@33 -- # return 0 00:04:03.260 20:59:41 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.260 20:59:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.260 20:59:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.260 20:59:41 -- setup/common.sh@18 -- # local node= 00:04:03.260 20:59:41 -- setup/common.sh@19 -- # local var val 00:04:03.260 20:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.260 20:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.260 20:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.260 20:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.260 20:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.260 20:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105474940 kB' 'MemAvailable: 108736248 kB' 'Buffers: 2704 kB' 'Cached: 14292220 kB' 'SwapCached: 0 kB' 'Active: 11340156 kB' 'Inactive: 3514596 kB' 'Active(anon): 10928128 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 
3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563152 kB' 'Mapped: 175844 kB' 'Shmem: 10368300 kB' 'KReclaimable: 318028 kB' 'Slab: 1173568 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855540 kB' 'KernelStack: 27136 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12411004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 20:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 
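The scan in progress here is computing surp (HugePages_Surp); the trace earlier already set anon=0 from AnonHugePages, and HugePages_Rsvd is read next. My reading of the intent of this verification pass, stated as an assumption rather than the script's exact logic, is that these counters let the test confirm the effective hugepage total against the 1025 it requested. A rough sketch of that kind of check, with the expected value and the comparison formula being my assumptions:

    # Sketch (assumed intent, not the script itself): sanity-check that the requested
    # hugepage total actually landed, discounting surplus pages.
    expected=1025
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    if (( total - surp == expected )); then
      echo "hugepage total matches: ${total} (rsvd=${rsvd})"
    else
      echo "unexpected hugepage total: got $((total - surp)), wanted ${expected}" >&2
    fi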
00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 
20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 20:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.262 20:59:41 -- setup/common.sh@33 -- # echo 0 00:04:03.262 20:59:41 -- setup/common.sh@33 -- # return 0 00:04:03.262 20:59:41 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.262 20:59:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.262 20:59:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.262 20:59:41 -- setup/common.sh@18 -- # local node= 00:04:03.262 20:59:41 -- setup/common.sh@19 -- # local var val 00:04:03.262 20:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.262 20:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.262 20:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.262 20:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.262 20:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.262 20:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105475848 kB' 'MemAvailable: 108737156 kB' 'Buffers: 2704 kB' 'Cached: 14292232 kB' 'SwapCached: 0 kB' 'Active: 11340164 kB' 'Inactive: 3514596 kB' 'Active(anon): 10928136 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563156 kB' 'Mapped: 175844 kB' 'Shmem: 10368312 kB' 'KReclaimable: 318028 kB' 'Slab: 1173568 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855540 kB' 'KernelStack: 27136 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12411020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4447604 kB' 
'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- 
setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.262 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.262 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 
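(Editorial note, not part of the trace: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" above are the per-key scan that setup/common.sh's get_meminfo performs. A minimal sketch of that scan, with a hypothetical helper name and simplified output handling, assuming the IFS=': ' / read -r var val _ pattern shown in the trace:

#!/usr/bin/env bash
# Hypothetical helper approximating get_meminfo as traced above: walk a meminfo
# file key by key and print the value of the requested field.
meminfo_value() {
    local get=$1                    # e.g. HugePages_Rsvd
    local mem_f=${2:-/proc/meminfo} # a per-node meminfo path may be passed instead
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "${val%% *}"                  # drop a trailing "kB" unit if present
        return 0
    done < "$mem_f"
    return 1
}

# Example: the two fields being queried in this part of the trace.
meminfo_value HugePages_Rsvd
meminfo_value HugePages_Surp
)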
00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.263 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.263 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.263 20:59:41 -- setup/common.sh@33 -- # echo 0 00:04:03.263 20:59:41 -- setup/common.sh@33 -- # return 0 00:04:03.263 20:59:41 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.263 20:59:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:03.263 nr_hugepages=1025 00:04:03.263 20:59:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.263 resv_hugepages=0 00:04:03.263 20:59:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.263 surplus_hugepages=0 00:04:03.263 20:59:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.263 anon_hugepages=0 00:04:03.263 20:59:41 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.264 20:59:41 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:03.264 20:59:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.264 20:59:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.264 20:59:41 -- setup/common.sh@18 -- # local node= 00:04:03.264 20:59:41 -- setup/common.sh@19 -- # local var val 00:04:03.264 20:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.264 20:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.264 20:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.264 20:59:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.264 20:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.264 20:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105475848 kB' 'MemAvailable: 108737156 kB' 'Buffers: 2704 kB' 'Cached: 14292256 kB' 'SwapCached: 0 kB' 'Active: 11339848 kB' 'Inactive: 3514596 kB' 'Active(anon): 10927820 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562764 kB' 'Mapped: 175844 kB' 'Shmem: 10368336 kB' 'KReclaimable: 318028 kB' 'Slab: 1173568 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855540 kB' 'KernelStack: 27120 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12410672 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 
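(Editorial note, not part of the trace: hugepages.sh@107-110 above check that the kernel really allocated the requested odd page count. A minimal sketch of that arithmetic, with the values echoed in the trace filled in as assumptions:

# nr_hugepages is what odd_alloc asked for; surp/resv/total come from get_meminfo.
nr_hugepages=1025
surp=0
resv=0
total=1025

(( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved hugepages"
(( total == nr_hugepages ))               || echo "kernel allocated ${total} of ${nr_hugepages}"
)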
00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 
00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.264 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.264 20:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.265 20:59:41 -- setup/common.sh@33 -- # echo 1025 00:04:03.265 20:59:41 -- setup/common.sh@33 -- # return 0 00:04:03.265 20:59:41 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.265 20:59:41 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.265 20:59:41 -- setup/hugepages.sh@27 -- # local node 00:04:03.265 20:59:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.265 20:59:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.265 20:59:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.265 20:59:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:03.265 20:59:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.265 20:59:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.265 20:59:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.265 20:59:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.265 20:59:41 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:04:03.265 20:59:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.265 20:59:41 -- setup/common.sh@18 -- # local node=0 00:04:03.265 20:59:41 -- setup/common.sh@19 -- # local var val 00:04:03.265 20:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.265 20:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.265 20:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.265 20:59:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.265 20:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.265 20:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52210708 kB' 'MemUsed: 13448300 kB' 'SwapCached: 0 kB' 'Active: 7018244 kB' 'Inactive: 3323792 kB' 'Active(anon): 6869004 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115644 kB' 'Mapped: 61520 kB' 'AnonPages: 229636 kB' 'Shmem: 6642612 kB' 'KernelStack: 12632 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702120 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 516692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.265 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 
20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.266 20:59:41 -- setup/common.sh@33 -- # echo 0 00:04:03.266 20:59:41 -- setup/common.sh@33 -- # return 0 00:04:03.266 20:59:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.266 20:59:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.266 20:59:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.266 20:59:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.266 20:59:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.266 20:59:41 -- setup/common.sh@18 -- # local node=1 00:04:03.266 20:59:41 -- setup/common.sh@19 -- # local var val 00:04:03.266 20:59:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.266 20:59:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.266 20:59:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.266 20:59:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.266 20:59:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.266 20:59:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53265528 kB' 'MemUsed: 7414312 kB' 'SwapCached: 0 kB' 'Active: 4321528 kB' 'Inactive: 190804 kB' 'Active(anon): 4058740 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 190804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4179332 kB' 'Mapped: 114324 kB' 'AnonPages: 333128 kB' 'Shmem: 3725740 kB' 'KernelStack: 14488 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132600 kB' 'Slab: 471448 kB' 'SReclaimable: 132600 kB' 'SUnreclaim: 338848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 
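(Editorial note, not part of the trace: when get_meminfo is called with a node index, as in the node0/node1 queries above, mem_f is switched to the per-NUMA-node meminfo file. A sketch of that lookup, assuming the "Node N <key>: <value>" line format implied by the trace's 'mem=("${mem[@]#Node +([0-9]) }")' step; the helper name is hypothetical:

node_hugepages_total() {
    local node=$1
    local f=/sys/devices/system/node/node${node}/meminfo
    # Per-node lines look like: "Node 0 HugePages_Total:   512"
    awk '$3 == "HugePages_Total:" { print $4 }' "$f"
}

# odd_alloc splits 1025 pages unevenly across the two nodes (512 + 513).
echo "node0=$(node_hugepages_total 0) node1=$(node_hugepages_total 1)"
)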
00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.268 20:59:41 -- setup/common.sh@32 -- # continue 00:04:03.268 20:59:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 20:59:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 20:59:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.268 20:59:41 -- setup/common.sh@33 -- # echo 0 00:04:03.268 20:59:41 -- setup/common.sh@33 -- # return 0 00:04:03.268 20:59:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.268 20:59:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.268 20:59:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.268 20:59:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.268 20:59:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:03.268 node0=512 expecting 513 00:04:03.268 20:59:41 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.268 20:59:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.268 20:59:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.268 20:59:41 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:03.268 node1=513 expecting 512 00:04:03.268 20:59:41 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:03.268 00:04:03.268 real 0m3.676s 00:04:03.268 user 0m1.439s 00:04:03.268 sys 0m2.270s 00:04:03.268 20:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.268 20:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:03.268 ************************************ 00:04:03.268 END TEST odd_alloc 00:04:03.268 ************************************ 00:04:03.529 20:59:41 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:03.529 20:59:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.529 20:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.529 20:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:03.529 ************************************ 00:04:03.529 START TEST custom_alloc 00:04:03.529 ************************************ 00:04:03.529 20:59:41 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:03.529 20:59:41 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:03.529 20:59:41 -- setup/hugepages.sh@169 -- # local node 00:04:03.529 20:59:41 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:03.529 20:59:41 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:03.529 20:59:41 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:03.529 20:59:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.529 20:59:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.529 20:59:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.529 20:59:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.529 20:59:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.529 20:59:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.529 20:59:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.529 20:59:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:03.529 20:59:41 -- setup/hugepages.sh@83 -- # : 256 00:04:03.529 20:59:41 -- setup/hugepages.sh@84 -- # : 1 00:04:03.529 20:59:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:03.529 20:59:41 -- setup/hugepages.sh@83 -- # : 0 00:04:03.529 20:59:41 -- setup/hugepages.sh@84 -- # : 0 00:04:03.529 20:59:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:03.529 20:59:41 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:03.529 20:59:41 -- setup/hugepages.sh@49 -- # local 
size=2097152 00:04:03.529 20:59:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.529 20:59:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.529 20:59:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.529 20:59:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.529 20:59:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.529 20:59:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.529 20:59:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.529 20:59:41 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:03.529 20:59:41 -- setup/hugepages.sh@78 -- # return 0 00:04:03.529 20:59:41 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:03.529 20:59:41 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:03.529 20:59:41 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:03.529 20:59:41 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:03.529 20:59:41 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:03.529 20:59:41 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:03.529 20:59:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.529 20:59:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.529 20:59:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.529 20:59:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.529 20:59:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.529 20:59:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:03.529 20:59:41 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.529 20:59:41 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:03.529 20:59:41 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.529 20:59:41 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:03.529 20:59:41 -- setup/hugepages.sh@78 -- # return 0 00:04:03.529 20:59:41 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:03.529 20:59:41 -- setup/hugepages.sh@187 -- # setup output 00:04:03.529 20:59:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.529 20:59:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.833 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.0 (8086 
0b00): Already using the vfio-pci driver 00:04:06.833 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.833 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.833 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.098 20:59:45 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:07.098 20:59:45 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:07.098 20:59:45 -- setup/hugepages.sh@89 -- # local node 00:04:07.098 20:59:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.098 20:59:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.098 20:59:45 -- setup/hugepages.sh@92 -- # local surp 00:04:07.098 20:59:45 -- setup/hugepages.sh@93 -- # local resv 00:04:07.098 20:59:45 -- setup/hugepages.sh@94 -- # local anon 00:04:07.098 20:59:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.098 20:59:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.098 20:59:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.098 20:59:45 -- setup/common.sh@18 -- # local node= 00:04:07.098 20:59:45 -- setup/common.sh@19 -- # local var val 00:04:07.098 20:59:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.098 20:59:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.098 20:59:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.098 20:59:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.098 20:59:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.098 20:59:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104449584 kB' 'MemAvailable: 107710892 kB' 'Buffers: 2704 kB' 'Cached: 14292364 kB' 'SwapCached: 0 kB' 'Active: 11341980 kB' 'Inactive: 3514596 kB' 'Active(anon): 10929952 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564492 kB' 'Mapped: 175964 kB' 'Shmem: 10368444 kB' 'KReclaimable: 318028 kB' 'Slab: 1173548 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855520 kB' 'KernelStack: 27168 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12411788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235268 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 
'DirectMap1G: 101711872 kB' 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.098 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.098 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- 
setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.099 20:59:45 -- setup/common.sh@33 -- # echo 0 00:04:07.099 20:59:45 -- setup/common.sh@33 -- # return 0 00:04:07.099 20:59:45 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.099 20:59:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.099 20:59:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.099 20:59:45 -- setup/common.sh@18 -- # local node= 00:04:07.099 20:59:45 -- setup/common.sh@19 -- # local var val 00:04:07.099 20:59:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.099 20:59:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.099 20:59:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.099 20:59:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.099 20:59:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.099 20:59:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104449912 kB' 'MemAvailable: 107711220 kB' 'Buffers: 2704 kB' 'Cached: 14292368 kB' 'SwapCached: 0 kB' 'Active: 11340936 kB' 'Inactive: 3514596 kB' 'Active(anon): 10928908 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563836 kB' 'Mapped: 175872 kB' 'Shmem: 10368448 kB' 'KReclaimable: 318028 kB' 'Slab: 1173524 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855496 kB' 'KernelStack: 27136 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12411800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 
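The custom_alloc trace earlier in this section builds two per-node targets, nodes_hp[0]=512 and nodes_hp[1]=1024, and joins them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' before calling scripts/setup.sh. A minimal stand-alone sketch of that bookkeeping, with the node counts hard-coded for illustration (the real test derives them via get_test_nr_hugepages and joins with a local IFS=, as seen at setup/hugepages.sh@167 and @181-187):

    #!/usr/bin/env bash
    # Sketch of the HUGENODE assembly visible in the custom_alloc trace.
    declare -a nodes_hp HUGENODE
    nodes_hp[0]=512    # 1048576 kB request / 2048 kB hugepage size
    nodes_hp[1]=1024   # 2097152 kB request / 2048 kB hugepage size

    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done

    printf -v joined '%s,' "${HUGENODE[@]}"
    echo "HUGENODE=${joined%,} (total ${_nr_hugepages} pages)"
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 (total 1536 pages)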
00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.099 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.099 20:59:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
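The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' above are setup/common.sh's get_meminfo scanning the mapfile'd meminfo contents field by field with IFS=': ' until it reaches the requested key, then echoing that key's value (0 for HugePages_Surp in this run). A compact sketch of the same pattern; the helper name below is illustrative, and it omits the per-node file selection and 'Node N ' prefix stripping the real helper performs:

    # Read one "Key: value" field from /proc/meminfo, as get_meminfo does.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in this run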
00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.100 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.100 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.100 20:59:45 -- setup/common.sh@33 -- # echo 0 00:04:07.100 20:59:45 -- setup/common.sh@33 -- # return 0 00:04:07.101 20:59:45 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.101 20:59:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.101 20:59:45 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:07.101 20:59:45 -- setup/common.sh@18 -- # local node= 00:04:07.101 20:59:45 -- setup/common.sh@19 -- # local var val 00:04:07.101 20:59:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.101 20:59:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.101 20:59:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.101 20:59:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.101 20:59:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.101 20:59:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.101 20:59:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104451096 kB' 'MemAvailable: 107712404 kB' 'Buffers: 2704 kB' 'Cached: 14292380 kB' 'SwapCached: 0 kB' 'Active: 11340952 kB' 'Inactive: 3514596 kB' 'Active(anon): 10928924 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563832 kB' 'Mapped: 175872 kB' 'Shmem: 10368460 kB' 'KReclaimable: 318028 kB' 'Slab: 1173524 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855496 kB' 'KernelStack: 27136 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12411816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 
20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 
20:59:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.101 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.101 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.102 20:59:45 -- setup/common.sh@33 -- # echo 0 00:04:07.102 20:59:45 -- setup/common.sh@33 -- # return 0 00:04:07.102 20:59:45 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.102 20:59:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:07.102 nr_hugepages=1536 00:04:07.102 20:59:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.102 resv_hugepages=0 00:04:07.102 20:59:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.102 surplus_hugepages=0 00:04:07.102 20:59:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.102 anon_hugepages=0 00:04:07.102 20:59:45 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.102 20:59:45 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:07.102 20:59:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.102 20:59:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.102 20:59:45 -- setup/common.sh@18 -- # local node= 00:04:07.102 20:59:45 -- setup/common.sh@19 -- # local var val 00:04:07.102 20:59:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.102 20:59:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.102 20:59:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.102 20:59:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.102 20:59:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.102 20:59:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104451228 kB' 'MemAvailable: 107712536 kB' 'Buffers: 2704 kB' 'Cached: 14292392 kB' 'SwapCached: 0 kB' 'Active: 11341280 kB' 'Inactive: 3514596 kB' 'Active(anon): 10929252 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564296 kB' 'Mapped: 175868 kB' 'Shmem: 10368472 kB' 'KReclaimable: 318028 kB' 'Slab: 1173524 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855496 kB' 'KernelStack: 27152 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12417796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 
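At this point the summary printf reports HugePages_Total: 1536 with HugePages_Rsvd and HugePages_Surp both 0, matching nr_hugepages = 1536 = 512 + 1024, and get_nodes moves on to comparing the per-node counts from /sys/devices/system/node/nodeN/meminfo against the HUGENODE targets. A hedged sketch of that per-node consistency check, with the expected counts taken from this run (the real test builds sorted nodes_test/nodes_sys arrays rather than checking inline):

    # Per-node check: each node should hold exactly what HUGENODE requested,
    # and the two nodes together should account for the 1536-page total.
    expected=(512 1024)   # node0, node1 targets for this run
    total=0
    for node in "${!expected[@]}"; do
        got=$(awk '/HugePages_Total/ {print $NF}' \
              "/sys/devices/system/node/node${node}/meminfo")
        (( total += got ))
        [[ $got -eq ${expected[node]} ]] || echo "node${node}: got $got, expected ${expected[node]}"
    done
    (( total == 1536 )) && echo "node0 + node1 = $total hugepages, as configured"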
00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.102 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.102 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.103 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.103 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.104 20:59:45 -- setup/common.sh@33 -- # echo 1536 00:04:07.104 20:59:45 -- setup/common.sh@33 -- # return 0 00:04:07.104 20:59:45 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.104 20:59:45 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.104 20:59:45 -- setup/hugepages.sh@27 -- # local node 00:04:07.104 20:59:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.104 20:59:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.104 20:59:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.104 20:59:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.104 20:59:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.104 20:59:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.104 20:59:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.104 20:59:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.104 20:59:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.104 20:59:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.104 20:59:45 -- setup/common.sh@18 -- # local node=0 00:04:07.104 20:59:45 -- setup/common.sh@19 -- # local var val 00:04:07.104 20:59:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.104 20:59:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.104 20:59:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.104 20:59:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.104 20:59:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.104 20:59:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52212728 kB' 'MemUsed: 13446280 kB' 'SwapCached: 0 kB' 'Active: 7018792 kB' 'Inactive: 3323792 kB' 'Active(anon): 6869552 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115736 kB' 'Mapped: 61520 kB' 'AnonPages: 230068 kB' 'Shmem: 6642704 kB' 'KernelStack: 12648 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702024 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 516596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.104 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.104 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.105 20:59:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.105 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.105 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.105 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.105 20:59:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.105 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.105 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.105 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@33 -- # echo 0 00:04:07.367 20:59:45 -- setup/common.sh@33 -- # return 0 00:04:07.367 20:59:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.367 20:59:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.367 
20:59:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.367 20:59:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.367 20:59:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.367 20:59:45 -- setup/common.sh@18 -- # local node=1 00:04:07.367 20:59:45 -- setup/common.sh@19 -- # local var val 00:04:07.367 20:59:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.367 20:59:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.367 20:59:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.367 20:59:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.367 20:59:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.367 20:59:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 52242248 kB' 'MemUsed: 8437592 kB' 'SwapCached: 0 kB' 'Active: 4322728 kB' 'Inactive: 190804 kB' 'Active(anon): 4059940 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 190804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4179376 kB' 'Mapped: 114348 kB' 'AnonPages: 334420 kB' 'Shmem: 3725784 kB' 'KernelStack: 14616 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132600 kB' 'Slab: 471500 kB' 'SReclaimable: 132600 kB' 'SUnreclaim: 338900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.367 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.367 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- 
setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # continue 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.368 20:59:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.368 20:59:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.368 20:59:45 -- setup/common.sh@33 -- # echo 0 00:04:07.368 20:59:45 -- setup/common.sh@33 -- # return 0 00:04:07.368 20:59:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.368 20:59:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.368 20:59:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.368 20:59:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.368 20:59:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.368 node0=512 expecting 512 00:04:07.368 20:59:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.368 20:59:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.368 20:59:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.368 20:59:45 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:07.368 node1=1024 expecting 1024 00:04:07.368 20:59:45 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:07.368 00:04:07.368 real 0m3.847s 00:04:07.368 user 0m1.520s 00:04:07.368 sys 0m2.384s 00:04:07.368 20:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.368 20:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:07.368 ************************************ 00:04:07.368 END TEST custom_alloc 00:04:07.368 ************************************ 00:04:07.368 20:59:45 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:07.368 20:59:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.368 20:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.368 20:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:07.368 ************************************ 00:04:07.368 START TEST no_shrink_alloc 00:04:07.368 ************************************ 00:04:07.368 20:59:45 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:07.368 20:59:45 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:07.368 20:59:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.368 20:59:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.368 
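A note on the sizing that no_shrink_alloc starts with here: get_test_nr_hugepages is called with 2097152 and lands on nr_hugepages=1024, which matches 2097152 kB divided by the 2048 kB Hugepagesize shown in the meminfo dumps. A minimal sketch of that arithmetic, assuming both figures are in kB as the trace suggests (variable names below are illustrative, not the script's own):

# Sketch only: derive a hugepage count from a requested size, assuming both
# values are in kB (2097152 kB / 2048 kB per page = 1024 pages).
size_kb=2097152
default_hugepages_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
if (( size_kb >= default_hugepages_kb )); then
    nr_hugepages=$(( size_kb / default_hugepages_kb ))
fi
echo "requesting ${nr_hugepages:-0} pages of ${default_hugepages_kb} kB"

On this machine that works out to 1024 pages, consistent with the 'Hugetlb: 2097152 kB' line in the dumps below.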
20:59:45 -- setup/hugepages.sh@51 -- # shift 00:04:07.368 20:59:45 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.368 20:59:45 -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.368 20:59:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.368 20:59:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.368 20:59:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.368 20:59:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.368 20:59:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.368 20:59:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.368 20:59:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.368 20:59:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.368 20:59:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.368 20:59:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.368 20:59:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.368 20:59:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.368 20:59:45 -- setup/hugepages.sh@73 -- # return 0 00:04:07.368 20:59:45 -- setup/hugepages.sh@198 -- # setup output 00:04:07.368 20:59:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.368 20:59:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.672 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:10.672 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.672 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.936 20:59:48 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:10.936 20:59:48 -- setup/hugepages.sh@89 -- # local node 00:04:10.936 20:59:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.936 20:59:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.936 20:59:48 -- setup/hugepages.sh@92 -- # local surp 00:04:10.936 20:59:48 -- setup/hugepages.sh@93 -- # local resv 00:04:10.936 20:59:48 -- setup/hugepages.sh@94 -- # local anon 00:04:10.936 20:59:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.936 20:59:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.936 20:59:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.936 20:59:48 -- setup/common.sh@18 -- # local node= 00:04:10.936 20:59:48 -- setup/common.sh@19 -- # local var val 00:04:10.936 20:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.936 20:59:48 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.936 20:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.936 20:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.936 20:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.936 20:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105505212 kB' 'MemAvailable: 108766520 kB' 'Buffers: 2704 kB' 'Cached: 14292516 kB' 'SwapCached: 0 kB' 'Active: 11343212 kB' 'Inactive: 3514596 kB' 'Active(anon): 10931184 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566012 kB' 'Mapped: 175908 kB' 'Shmem: 10368596 kB' 'KReclaimable: 318028 kB' 'Slab: 1174036 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856008 kB' 'KernelStack: 27392 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12417676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 
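For readers following the scan traced above and below: this is the per-key walk that setup/common.sh's get_meminfo performs, i.e. pick /proc/meminfo or a per-node meminfo file, strip the "Node N " prefix, then read with IFS=': ' until the requested key matches and its value is echoed. A simplified standalone sketch of the same pattern (the function name and the lack of error handling are mine, not the real helper):

# Minimal standalone sketch of the scan seen in the trace, not the exact
# common.sh implementation: pick the right meminfo file, drop the
# "Node N " prefix that per-node files carry, and print the value for one key.
shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1536 on this machine
#      get_meminfo_sketch HugePages_Surp 0   -> 0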
00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.936 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.936 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 
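The check traced earlier at setup/hugepages.sh@96 ("always [madvise] never" matched against *[never]*) appears to gate whether AnonHugePages can contribute to the verified total; that reading is an inference from the one traced line, not something the log spells out. A hedged sketch of such a guard:

# Sketch of a THP guard like the one traced at hugepages.sh@96:
# AnonHugePages only matters when transparent hugepages are not disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon_kb=0
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages counted: ${anon_kb:-0} kB"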
00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.937 20:59:48 -- setup/common.sh@33 -- # echo 0 00:04:10.937 20:59:48 -- setup/common.sh@33 -- # return 0 00:04:10.937 20:59:48 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.937 20:59:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.937 20:59:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.937 20:59:48 -- setup/common.sh@18 -- # local node= 00:04:10.937 20:59:48 -- setup/common.sh@19 -- # local var val 00:04:10.937 20:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.937 20:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.937 20:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.937 20:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.937 20:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.937 20:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105508488 kB' 'MemAvailable: 
108769796 kB' 'Buffers: 2704 kB' 'Cached: 14292520 kB' 'SwapCached: 0 kB' 'Active: 11343264 kB' 'Inactive: 3514596 kB' 'Active(anon): 10931236 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566024 kB' 'Mapped: 175908 kB' 'Shmem: 10368600 kB' 'KReclaimable: 318028 kB' 'Slab: 1174004 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855976 kB' 'KernelStack: 27136 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12416044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
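The surplus value being extracted by this scan feeds the same kind of consistency check seen earlier at setup/hugepages.sh@110, where HugePages_Total had to equal the requested pages plus surplus plus reserved. A rough sketch of that accounting against /proc/meminfo (expected=1024 is this test's request; the rest is read live, and the layout is mine, not hugepages.sh):

# Sketch of the pool-consistency check: kernel-reported total must equal
# requested pages + surplus + reserved.
expected=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
if (( total == expected + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
fi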
00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.937 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 
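The per-node side of this bookkeeping (the "node0=512 expecting 512" / "node1=1024 expecting 1024" results above) comes from the per-node meminfo files under /sys/devices/system/node, the same files the traced get_meminfo calls read. A small sketch that prints each node's HugePages_Total directly (standard sysfs paths; not the test script itself):

# Sketch: report HugePages_Total per NUMA node from the per-node meminfo files.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -e $node_dir/meminfo ]] || continue
    node=${node_dir##*node}
    pages=$(awk '/^Node [0-9]+ HugePages_Total:/ {print $4}' "$node_dir/meminfo")
    echo "node${node}=${pages} hugepages"
done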
00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ 
CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 20:59:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 20:59:48 -- setup/common.sh@33 -- # echo 0 00:04:10.938 20:59:48 -- setup/common.sh@33 -- # return 0 00:04:10.938 20:59:48 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.938 20:59:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.938 20:59:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.938 20:59:48 -- setup/common.sh@18 -- # local node= 00:04:10.938 20:59:48 -- setup/common.sh@19 -- # local var val 00:04:10.938 20:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.938 20:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.938 20:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.938 20:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.938 20:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.938 20:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105506636 kB' 'MemAvailable: 108767944 kB' 'Buffers: 2704 kB' 'Cached: 14292532 kB' 'SwapCached: 0 kB' 'Active: 11342784 kB' 'Inactive: 3514596 kB' 'Active(anon): 10930756 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565432 kB' 'Mapped: 175900 kB' 'Shmem: 10368612 kB' 'KReclaimable: 318028 kB' 'Slab: 1174084 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856056 kB' 'KernelStack: 27248 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12417704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 
20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.939 20:59:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.939 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.939 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # continue 
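A hedged sketch (read_meminfo_snapshot is an illustrative name) of the snapshot step this scan keeps repeating: choose /proc/meminfo or a per-node file, slurp it into an array with mapfile, and strip the "Node <n> " prefix that the per-node files put in front of every line, so the same key/value parser handles both sources. The +([0-9]) pattern needs extglob, as the mem=(...) line in the trace implies.

    shopt -s extglob
    read_meminfo_snapshot() {
      local node=${1:-} mem_f=/proc/meminfo
      local -a mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines look like "Node 0 MemTotal: ..."
      printf '%s\n' "${mem[@]}"
    }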
00:04:10.940 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.940 20:59:49 -- setup/common.sh@33 -- # echo 0 00:04:10.940 20:59:49 -- setup/common.sh@33 -- # return 0 00:04:10.940 20:59:49 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.940 20:59:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.940 nr_hugepages=1024 00:04:10.940 20:59:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.940 resv_hugepages=0 00:04:10.940 20:59:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.940 surplus_hugepages=0 00:04:10.940 20:59:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.940 anon_hugepages=0 00:04:10.940 20:59:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.940 20:59:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.940 20:59:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.940 20:59:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.940 20:59:49 -- setup/common.sh@18 -- # local node= 00:04:10.940 20:59:49 -- setup/common.sh@19 -- # local var val 00:04:10.940 20:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.940 20:59:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.940 20:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.940 20:59:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.940 20:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.940 20:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105506588 kB' 'MemAvailable: 108767896 kB' 'Buffers: 2704 kB' 'Cached: 14292544 kB' 'SwapCached: 0 kB' 'Active: 11342612 kB' 'Inactive: 3514596 kB' 'Active(anon): 10930584 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565268 kB' 'Mapped: 175900 kB' 'Shmem: 10368624 kB' 'KReclaimable: 318028 kB' 'Slab: 1174084 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 856056 kB' 'KernelStack: 27360 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12417720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 
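A rough sketch of the bookkeeping being verified around here, reusing the get_meminfo_value sketch from earlier (verify_hugepage_totals is a hypothetical name, not SPDK's verify_nr_hugepages itself): the nr_hugepages/resv_hugepages/surplus_hugepages figures echoed just above feed an arithmetic check that the allocated total is fully accounted for.

    verify_hugepage_totals() {
      local expected=$1 surp resv total
      surp=$(get_meminfo_value HugePages_Surp)
      resv=$(get_meminfo_value HugePages_Rsvd)
      total=$(get_meminfo_value HugePages_Total)
      # consistent pool: every allocated page is explained by the request
      # plus any surplus or reserved pages
      (( total == expected + surp + resv ))
    }
    # this run: 1024 == 1024 + 0 + 0, so the check succeeds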
00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.203 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.203 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 
20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.204 20:59:49 -- setup/common.sh@33 -- # echo 1024 00:04:11.204 20:59:49 -- setup/common.sh@33 -- # return 0 00:04:11.204 20:59:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.204 20:59:49 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.204 20:59:49 -- setup/hugepages.sh@27 -- # local node 00:04:11.204 20:59:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.204 20:59:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.204 20:59:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.204 20:59:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:11.204 20:59:49 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.204 20:59:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.204 20:59:49 -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.204 20:59:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.204 20:59:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.204 20:59:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.204 20:59:49 -- setup/common.sh@18 -- # local node=0 00:04:11.204 20:59:49 -- setup/common.sh@19 -- # local var val 00:04:11.204 20:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.204 20:59:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.204 20:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.204 20:59:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.204 20:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.204 20:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51168576 kB' 'MemUsed: 14490432 kB' 'SwapCached: 0 kB' 'Active: 7020900 kB' 'Inactive: 3323792 kB' 'Active(anon): 6871660 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115880 kB' 'Mapped: 61528 kB' 'AnonPages: 232100 kB' 'Shmem: 6642848 kB' 'KernelStack: 12696 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702176 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 516748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.204 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 
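An illustrative sketch (collect_node_hugepages is a made-up name) of the per-node pass in progress here: enumerate /sys/devices/system/node/node*/, read each node's own meminfo, and record that node's HugePages_Total, mirroring the nodes_sys bookkeeping and the node0 read visible in the trace.

    collect_node_hugepages() {
      # fills nodes_sys[<id>] with each NUMA node's HugePages_Total
      local node
      nodes_sys=()
      for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
      done
    }
    # on this box (two nodes, the whole pool on node 0):
    #   collect_node_hugepages
    #   echo "node0=${nodes_sys[0]} expecting 1024"   ->  node0=1024 expecting 1024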
20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # continue 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 20:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 20:59:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.205 20:59:49 -- setup/common.sh@33 -- # echo 0 00:04:11.205 20:59:49 -- setup/common.sh@33 -- # return 0 00:04:11.205 20:59:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.205 20:59:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.205 20:59:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.205 20:59:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.205 20:59:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.205 node0=1024 expecting 1024 00:04:11.205 20:59:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.205 20:59:49 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:11.205 20:59:49 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:11.205 20:59:49 -- setup/hugepages.sh@202 -- # setup output 00:04:11.205 20:59:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.205 20:59:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.507 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:14.507 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:14.507 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:14.771 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:14.771 20:59:52 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:14.771 20:59:52 -- setup/hugepages.sh@89 -- # local node 00:04:14.771 20:59:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.771 20:59:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.771 20:59:52 -- setup/hugepages.sh@92 -- # local surp 00:04:14.771 20:59:52 -- setup/hugepages.sh@93 -- # local resv 00:04:14.771 20:59:52 -- setup/hugepages.sh@94 -- # local anon 00:04:14.771 20:59:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.771 20:59:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.771 20:59:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.771 20:59:52 -- setup/common.sh@18 -- # local node= 00:04:14.771 20:59:52 -- setup/common.sh@19 -- # local var val 00:04:14.771 20:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.771 20:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.771 20:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.771 20:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.771 20:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.771 20:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.771 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105544264 kB' 'MemAvailable: 108805572 kB' 'Buffers: 2704 kB' 'Cached: 14292644 kB' 'SwapCached: 0 kB' 'Active: 11343564 kB' 'Inactive: 3514596 kB' 'Active(anon): 10931536 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565636 kB' 'Mapped: 176164 kB' 'Shmem: 10368724 kB' 'KReclaimable: 318028 kB' 'Slab: 1173528 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855500 kB' 'KernelStack: 27120 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12413688 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 
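
The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries running through this part of the log are bash xtrace output from setup/common.sh's get_meminfo helper, which scans every /proc/meminfo field and skips each one until it reaches the key it was asked for (AnonHugePages in this pass). The loop below is a minimal sketch reconstructed from the trace, not the script's verbatim code; the return-1 fallback and the exact local declarations are assumptions.

shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # With a node index, read that node's sysfs meminfo instead of /proc.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other key just "continue"s
        echo "${val:-0}"                   # matched key: print its value
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1                               # assumption: requested key not found
}

Called as get_meminfo AnonHugePages for the system-wide value (0 in this run) or as get_meminfo HugePages_Surp 0 for node 0, which is exactly what the rest of this trace exercises.
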
00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.772 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.773 20:59:52 -- setup/common.sh@33 -- # echo 0 00:04:14.773 20:59:52 -- setup/common.sh@33 -- # return 0 00:04:14.773 20:59:52 -- setup/hugepages.sh@97 -- # anon=0 00:04:14.773 20:59:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.773 20:59:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.773 20:59:52 -- setup/common.sh@18 -- # local node= 00:04:14.773 20:59:52 -- setup/common.sh@19 -- # local var val 00:04:14.773 20:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.773 20:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.773 20:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.773 20:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.773 20:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.773 20:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105545284 kB' 'MemAvailable: 108806592 kB' 'Buffers: 2704 kB' 'Cached: 14292648 kB' 'SwapCached: 0 kB' 'Active: 11342616 kB' 'Inactive: 3514596 kB' 'Active(anon): 10930588 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565148 kB' 'Mapped: 176080 kB' 'Shmem: 10368728 kB' 'KReclaimable: 318028 kB' 'Slab: 1173368 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855340 kB' 'KernelStack: 27184 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12413700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 
-- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # 
continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 20:59:52 -- setup/common.sh@33 -- # echo 0 00:04:14.774 20:59:52 -- setup/common.sh@33 -- # return 0 00:04:14.774 20:59:52 -- setup/hugepages.sh@99 -- # surp=0 00:04:14.774 20:59:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.774 20:59:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.774 20:59:52 -- setup/common.sh@18 -- # local node= 00:04:14.774 20:59:52 -- setup/common.sh@19 -- # local var val 00:04:14.774 20:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.774 20:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.774 20:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.774 20:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.774 20:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.774 20:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.774 20:59:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105545376 kB' 'MemAvailable: 108806684 kB' 'Buffers: 2704 kB' 'Cached: 14292648 kB' 'SwapCached: 0 kB' 'Active: 11342400 kB' 'Inactive: 3514596 kB' 'Active(anon): 10930372 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564888 kB' 'Mapped: 175932 kB' 'Shmem: 10368728 kB' 'KReclaimable: 318028 kB' 'Slab: 1173488 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855460 kB' 'KernelStack: 27168 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12413716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 
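
The printf block just above is one full snapshot of /proc/meminfo taken with mapfile before the HugePages_Rsvd scan; hugepages.sh runs one get_meminfo pass per counter and then checks the totals (the hugepages.sh@97 through @110 entries in this trace: anon=0, surp=0, resv=0 against the requested 1024 pages). A hedged sketch of that accounting follows; the function name check_hugepage_counts is invented for illustration, while the echoed labels and the arithmetic mirror what the trace shows.

check_hugepage_counts() {
    local nr_hugepages=$1                  # 1024 in this run
    local anon surp resv total

    anon=$(get_meminfo AnonHugePages)      # trace: anon=0
    surp=$(get_meminfo HugePages_Surp)     # trace: surp=0
    resv=$(get_meminfo HugePages_Rsvd)     # trace: resv=0
    total=$(get_meminfo HugePages_Total)   # trace: 1024

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pass succeeds only if the kernel's total matches what was requested.
    (( total == nr_hugepages + surp + resv ))
}

In this run the check reduces to 1024 == 1024 + 0 + 0, so the trace proceeds to the per-node pass.
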
00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 
20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 20:59:52 -- setup/common.sh@33 -- # echo 0 00:04:14.775 20:59:52 -- setup/common.sh@33 -- # return 0 00:04:14.775 20:59:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:14.775 20:59:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:14.775 nr_hugepages=1024 00:04:14.775 20:59:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.775 resv_hugepages=0 00:04:14.775 20:59:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.775 surplus_hugepages=0 00:04:14.775 20:59:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.775 anon_hugepages=0 00:04:14.775 20:59:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.775 20:59:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:14.775 20:59:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.775 20:59:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.775 20:59:52 -- setup/common.sh@18 -- # local node= 00:04:14.775 20:59:52 -- setup/common.sh@19 -- # local var val 00:04:14.776 20:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.776 20:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.776 20:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.776 20:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.776 20:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.776 20:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105545328 kB' 'MemAvailable: 108806636 kB' 'Buffers: 2704 kB' 'Cached: 14292672 kB' 'SwapCached: 0 kB' 'Active: 11342660 kB' 'Inactive: 3514596 kB' 'Active(anon): 10930632 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565124 kB' 'Mapped: 
175908 kB' 'Shmem: 10368752 kB' 'KReclaimable: 318028 kB' 'Slab: 1173488 kB' 'SReclaimable: 318028 kB' 'SUnreclaim: 855460 kB' 'KernelStack: 27184 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12413364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 120384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4447604 kB' 'DirectMap2M: 29835264 kB' 'DirectMap1G: 101711872 kB' 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
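
From here the same scan is repeated for HugePages_Total and then per NUMA node: hugepages.sh's get_nodes records a huge page count for each /sys/devices/system/node/node<N> (apparently 1024 on node0 and 0 on node1, with no_nodes=2 in this run), and get_meminfo is re-run with a node argument so mem_f points at that node's sysfs meminfo. A rough sketch under those assumptions; per_node_check is an invented name, and the 0 placeholder stands in for the per-node count whose source file is not visible in this log.

shopt -s extglob
declare -A nodes_sys=()

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # The trace records 1024 for one node and 0 for the other; the file
        # the real helper reads for this is not shown here, so 0 is a placeholder.
        nodes_sys[${node##*node}]=0
    done
    no_nodes=${#nodes_sys[@]}    # 2 on this host
}

per_node_check() {
    local node
    for node in "${!nodes_sys[@]}"; do
        # Node-local surplus pages via the get_meminfo sketch above, e.g.
        # get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo.
        get_meminfo HugePages_Surp "$node"
    done
}

The real script also keeps a separate expected-count array (nodes_test in the trace) alongside nodes_sys; only fragments of that bookkeeping are visible in this excerpt.
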
00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.776 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.777 20:59:52 -- setup/common.sh@33 -- # echo 1024 00:04:14.777 20:59:52 -- setup/common.sh@33 -- # return 0 00:04:14.777 20:59:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.777 20:59:52 -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.777 20:59:52 -- setup/hugepages.sh@27 -- # local node 00:04:14.777 20:59:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.777 20:59:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.777 20:59:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.777 20:59:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:14.777 20:59:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.777 20:59:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.777 20:59:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.777 20:59:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.777 20:59:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.777 20:59:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.777 20:59:52 -- setup/common.sh@18 -- # local node=0 00:04:14.777 20:59:52 -- setup/common.sh@19 -- # local var val 00:04:14.777 20:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.777 20:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.777 20:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.777 20:59:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.777 20:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.777 20:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51183596 kB' 'MemUsed: 14475412 kB' 'SwapCached: 0 kB' 'Active: 7019932 kB' 'Inactive: 3323792 kB' 'Active(anon): 6870692 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323792 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10115976 kB' 'Mapped: 61524 kB' 'AnonPages: 230896 kB' 'Shmem: 6642944 kB' 'KernelStack: 12632 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185428 kB' 'Slab: 702048 kB' 'SReclaimable: 185428 kB' 'SUnreclaim: 516620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 
20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 20:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- 
setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # continue 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.778 20:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.778 20:59:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.778 20:59:52 -- setup/common.sh@33 -- # echo 0 00:04:14.778 20:59:52 -- setup/common.sh@33 -- # return 0 00:04:14.778 20:59:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.778 20:59:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.778 20:59:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.778 20:59:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.778 20:59:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:14.778 node0=1024 expecting 1024 00:04:14.778 20:59:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:14.778 00:04:14.778 real 0m7.592s 00:04:14.778 user 0m2.990s 00:04:14.778 sys 0m4.712s 00:04:14.778 20:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.778 20:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.039 
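The per-node check traced above amounts to reading one key out of /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix in front of the usual meminfo fields. A minimal standalone sketch of that lookup, assuming a Linux host with per-node meminfo files; the helper name get_node_meminfo is illustrative and is not part of the SPDK setup scripts:

    # Print one field from a node's meminfo, e.g. HugePages_Total or HugePages_Surp.
    # Per-node lines look like: "Node 0 HugePages_Total:  1024"
    get_node_meminfo() {    # usage: get_node_meminfo <key> [node]
        local key=$1 node=${2:-0}
        awk -v k="$key:" '$3 == k {print $4}' \
            "/sys/devices/system/node/node$node/meminfo"
    }

    get_node_meminfo HugePages_Total 0   # 1024 in this run
    get_node_meminfo HugePages_Surp 0    # 0 in this run

The test then adds the per-node surplus counts and compares the result against the expected distribution, which is what the "node0=1024 expecting 1024" line in the trace above records.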
************************************ 00:04:15.039 END TEST no_shrink_alloc 00:04:15.039 ************************************ 00:04:15.039 20:59:52 -- setup/hugepages.sh@217 -- # clear_hp 00:04:15.039 20:59:52 -- setup/hugepages.sh@37 -- # local node hp 00:04:15.039 20:59:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.039 20:59:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.039 20:59:52 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.039 20:59:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.039 20:59:52 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.039 20:59:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.039 20:59:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.039 20:59:52 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.039 20:59:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.039 20:59:52 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.039 20:59:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.039 20:59:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.039 00:04:15.039 real 0m27.224s 00:04:15.039 user 0m10.641s 00:04:15.039 sys 0m16.959s 00:04:15.039 20:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.039 20:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.039 ************************************ 00:04:15.039 END TEST hugepages 00:04:15.039 ************************************ 00:04:15.039 20:59:52 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.039 20:59:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.039 20:59:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.039 20:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:15.039 ************************************ 00:04:15.039 START TEST driver 00:04:15.039 ************************************ 00:04:15.039 20:59:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.039 * Looking for test storage... 
00:04:15.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:15.039 20:59:53 -- setup/driver.sh@68 -- # setup reset 00:04:15.039 20:59:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.039 20:59:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.327 20:59:57 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.327 20:59:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.327 20:59:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.327 20:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.327 ************************************ 00:04:20.327 START TEST guess_driver 00:04:20.327 ************************************ 00:04:20.327 20:59:57 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:20.327 20:59:57 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.327 20:59:57 -- setup/driver.sh@47 -- # local fail=0 00:04:20.327 20:59:57 -- setup/driver.sh@49 -- # pick_driver 00:04:20.327 20:59:57 -- setup/driver.sh@36 -- # vfio 00:04:20.327 20:59:57 -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.327 20:59:57 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.327 20:59:57 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.327 20:59:57 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.327 20:59:57 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.327 20:59:57 -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:20.327 20:59:57 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.327 20:59:57 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.327 20:59:57 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.327 20:59:57 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.327 20:59:57 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.327 20:59:57 -- setup/driver.sh@30 -- # return 0 00:04:20.327 20:59:57 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.327 20:59:57 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.327 20:59:57 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.327 20:59:57 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:20.327 Looking for driver=vfio-pci 00:04:20.327 20:59:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.327 20:59:57 -- setup/driver.sh@45 -- # setup output config 00:04:20.327 20:59:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.327 20:59:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.636 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.636 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.636 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.637 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.637 21:00:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:23.637 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.637 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.637 21:00:01 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:23.637 21:00:01 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:23.637 21:00:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:23.966 21:00:01 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:23.966 21:00:01 -- setup/driver.sh@65 -- # setup reset 00:04:23.966 21:00:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.966 21:00:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.254 00:04:29.254 real 0m8.943s 00:04:29.254 user 0m3.036s 00:04:29.254 sys 0m5.080s 00:04:29.254 21:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.254 21:00:06 -- common/autotest_common.sh@10 -- # set +x 00:04:29.254 ************************************ 00:04:29.254 END TEST guess_driver 00:04:29.254 ************************************ 00:04:29.254 00:04:29.254 real 0m14.006s 00:04:29.254 user 0m4.551s 00:04:29.254 sys 0m7.824s 00:04:29.254 21:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.254 21:00:06 -- common/autotest_common.sh@10 -- # set +x 00:04:29.254 ************************************ 00:04:29.254 END TEST driver 00:04:29.254 ************************************ 00:04:29.254 21:00:06 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:29.254 21:00:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.254 21:00:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.254 21:00:06 -- common/autotest_common.sh@10 -- # set +x 00:04:29.254 ************************************ 00:04:29.254 START TEST devices 00:04:29.254 ************************************ 00:04:29.254 21:00:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:29.254 * Looking for test storage... 
00:04:29.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.254 21:00:07 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:29.254 21:00:07 -- setup/devices.sh@192 -- # setup reset 00:04:29.254 21:00:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.254 21:00:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.465 21:00:11 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.465 21:00:11 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:33.465 21:00:11 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:33.465 21:00:11 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:33.465 21:00:11 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:33.465 21:00:11 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:33.465 21:00:11 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:33.465 21:00:11 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.465 21:00:11 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:33.465 21:00:11 -- setup/devices.sh@196 -- # blocks=() 00:04:33.465 21:00:11 -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.465 21:00:11 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.465 21:00:11 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.465 21:00:11 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.465 21:00:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.465 21:00:11 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.465 21:00:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.465 21:00:11 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:33.465 21:00:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:33.465 21:00:11 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.465 21:00:11 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:33.465 21:00:11 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.465 No valid GPT data, bailing 00:04:33.465 21:00:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.465 21:00:11 -- scripts/common.sh@393 -- # pt= 00:04:33.465 21:00:11 -- scripts/common.sh@394 -- # return 1 00:04:33.465 21:00:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.465 21:00:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.465 21:00:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.465 21:00:11 -- setup/common.sh@80 -- # echo 1920383410176 00:04:33.465 21:00:11 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:33.465 21:00:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.465 21:00:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:33.465 21:00:11 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:33.465 21:00:11 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.465 21:00:11 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.465 21:00:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.465 21:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.465 21:00:11 -- common/autotest_common.sh@10 -- # set +x 00:04:33.465 ************************************ 00:04:33.465 START TEST nvme_mount 00:04:33.465 ************************************ 00:04:33.465 21:00:11 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:33.465 21:00:11 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.465 21:00:11 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.465 21:00:11 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.465 21:00:11 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.465 21:00:11 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.465 21:00:11 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.465 21:00:11 -- setup/common.sh@40 -- # local part_no=1 00:04:33.465 21:00:11 -- setup/common.sh@41 -- # local size=1073741824 00:04:33.465 21:00:11 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.465 21:00:11 -- setup/common.sh@44 -- # parts=() 00:04:33.465 21:00:11 -- setup/common.sh@44 -- # local parts 00:04:33.465 21:00:11 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.465 21:00:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.465 21:00:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.465 21:00:11 -- setup/common.sh@46 -- # (( part++ )) 00:04:33.465 21:00:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.465 21:00:11 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.465 21:00:11 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.465 21:00:11 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.407 Creating new GPT entries in memory. 00:04:34.407 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.407 other utilities. 00:04:34.407 21:00:12 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.407 21:00:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.407 21:00:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.407 21:00:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.407 21:00:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.349 Creating new GPT entries in memory. 00:04:35.349 The operation has completed successfully. 
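The partitioning step that just completed is the core of the nvme_mount test: wipe the GPT, create a single 1 GiB data partition, format it with ext4, mount it, and later verify that a test file survives. A condensed sketch of that sequence, assuming a scratch disk /dev/nvme0n1 that may be destroyed; the $MNT path is illustrative (the real test mounts under spdk/test/setup/nvme_mount), and partprobe stands in for the udev-event synchronization the SPDK script does via sync_dev_uevents.sh:

    DISK=/dev/nvme0n1
    MNT=/tmp/nvme_mount_test                    # illustrative mount point
    sgdisk "$DISK" --zap-all                    # destroy any existing GPT/MBR
    sgdisk "$DISK" --new=1:2048:2099199         # one 1 GiB partition (512-byte sectors)
    partprobe "$DISK"                           # let the kernel re-read the table
    mkfs.ext4 -qF "${DISK}p1"                   # quiet, force
    mkdir -p "$MNT"
    mount "${DISK}p1" "$MNT"
    touch "$MNT/test_nvme"                      # the file the verify step checks for

    # teardown, mirroring cleanup_nvme later in the log
    umount "$MNT"
    wipefs --all "${DISK}p1"
    wipefs --all "$DISK"

The second half of the test repeats the same flow without a partition table (mkfs.ext4 directly on /dev/nvme0n1 with a 1024M size), which is why another mkfs/mount/verify round appears further down in the log.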
00:04:35.349 21:00:13 -- setup/common.sh@57 -- # (( part++ )) 00:04:35.349 21:00:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.349 21:00:13 -- setup/common.sh@62 -- # wait 2140435 00:04:35.349 21:00:13 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.349 21:00:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:35.349 21:00:13 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.349 21:00:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.349 21:00:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.349 21:00:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.349 21:00:13 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.349 21:00:13 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.349 21:00:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.349 21:00:13 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.349 21:00:13 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.349 21:00:13 -- setup/devices.sh@53 -- # local found=0 00:04:35.349 21:00:13 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.349 21:00:13 -- setup/devices.sh@56 -- # : 00:04:35.349 21:00:13 -- setup/devices.sh@59 -- # local pci status 00:04:35.349 21:00:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.349 21:00:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.349 21:00:13 -- setup/devices.sh@47 -- # setup output config 00:04:35.349 21:00:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.349 21:00:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:38.652 21:00:16 -- setup/devices.sh@63 -- # found=1 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 
21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.652 21:00:16 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.652 21:00:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.912 21:00:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.912 21:00:16 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.912 21:00:16 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.912 21:00:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.912 21:00:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.912 21:00:16 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:38.912 21:00:16 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.912 21:00:16 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.912 21:00:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.912 21:00:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.912 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.912 21:00:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.912 21:00:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.172 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:39.172 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:39.172 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.172 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.172 21:00:17 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:39.172 21:00:17 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:39.172 21:00:17 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.172 21:00:17 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:39.172 21:00:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:39.172 21:00:17 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.172 21:00:17 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.172 21:00:17 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:39.172 21:00:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:39.172 21:00:17 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.173 21:00:17 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.173 21:00:17 -- setup/devices.sh@53 -- # local found=0 00:04:39.173 21:00:17 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.173 21:00:17 -- setup/devices.sh@56 -- # : 00:04:39.173 21:00:17 -- setup/devices.sh@59 -- # local pci status 00:04:39.173 21:00:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.173 21:00:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:39.173 21:00:17 -- setup/devices.sh@47 -- # setup output config 00:04:39.173 21:00:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.173 21:00:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.475 21:00:20 -- setup/devices.sh@63 -- # found=1 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.475 21:00:20 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.475 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.737 21:00:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.737 21:00:20 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:42.737 21:00:20 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.737 21:00:20 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.737 21:00:20 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.737 21:00:20 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.737 21:00:20 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:42.737 21:00:20 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:42.737 21:00:20 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.737 21:00:20 -- setup/devices.sh@50 -- # local mount_point= 00:04:42.737 21:00:20 -- setup/devices.sh@51 -- # local test_file= 00:04:42.737 21:00:20 -- setup/devices.sh@53 -- # local found=0 00:04:42.737 21:00:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.737 21:00:20 -- setup/devices.sh@59 -- # local pci status 00:04:42.737 21:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.737 21:00:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.737 21:00:20 -- setup/devices.sh@47 -- # setup output config 00:04:42.737 21:00:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.737 21:00:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.034 21:00:23 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:46.034 21:00:23 -- setup/devices.sh@63 -- # found=1 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.034 21:00:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.034 21:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.296 21:00:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.296 21:00:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.296 21:00:24 -- setup/devices.sh@68 -- # return 0 00:04:46.296 21:00:24 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:46.296 21:00:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.296 21:00:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:46.296 21:00:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.296 21:00:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.296 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.296 00:04:46.296 real 0m13.192s 00:04:46.296 user 0m4.216s 00:04:46.296 sys 0m6.896s 00:04:46.296 21:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.296 21:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.296 ************************************ 00:04:46.296 END TEST nvme_mount 00:04:46.296 ************************************ 00:04:46.296 21:00:24 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:46.296 21:00:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.296 21:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.296 21:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.296 ************************************ 00:04:46.296 START TEST dm_mount 00:04:46.296 ************************************ 00:04:46.296 21:00:24 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:46.296 21:00:24 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:46.296 21:00:24 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:46.296 21:00:24 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:46.296 21:00:24 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:46.296 21:00:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.296 21:00:24 -- setup/common.sh@40 -- # local part_no=2 00:04:46.296 21:00:24 -- setup/common.sh@41 -- # local size=1073741824 00:04:46.296 21:00:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.296 21:00:24 -- setup/common.sh@44 -- # parts=() 00:04:46.296 21:00:24 -- setup/common.sh@44 -- # local parts 00:04:46.296 21:00:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.296 21:00:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.296 21:00:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.296 21:00:24 -- setup/common.sh@46 -- # (( part++ )) 00:04:46.296 21:00:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.296 21:00:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.296 21:00:24 -- setup/common.sh@46 -- # (( part++ )) 00:04:46.296 21:00:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.296 21:00:24 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:46.296 21:00:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.296 21:00:24 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:47.682 Creating new GPT entries in memory. 00:04:47.682 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.682 other utilities. 00:04:47.682 21:00:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.682 21:00:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.682 21:00:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.682 21:00:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.682 21:00:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:48.655 Creating new GPT entries in memory. 00:04:48.655 The operation has completed successfully. 
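The dm_mount test that starts here follows the same pattern but layers device-mapper on top: two 1 GiB partitions are created (the second sgdisk call follows just below), a dm device named nvme_dm_test is built over them, and /dev/mapper/nvme_dm_test is formatted and mounted. The log does not show the dm table itself; a linear concatenation of the two partitions is one plausible layout, and that is what this sketch assumes:

    P1=/dev/nvme0n1p1
    P2=/dev/nvme0n1p2
    S1=$(blockdev --getsz "$P1")                # partition sizes in 512-byte sectors
    S2=$(blockdev --getsz "$P2")

    # Build a linear dm device spanning both partitions (dmsetup reads the table from stdin).
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
        "$S1" "$P1" "$S1" "$S2" "$P2" | dmsetup create nvme_dm_test

    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /tmp/dm_mount_test                 # illustrative mount point
    mount /dev/mapper/nvme_dm_test /tmp/dm_mount_test

    # teardown, mirroring cleanup_dm later in the log
    umount /tmp/dm_mount_test
    dmsetup remove --force nvme_dm_test
    wipefs --all "$P1" "$P2"

As in the nvme_mount case, the verify step checks that the test file exists on the mounted filesystem and that setup.sh refuses to rebind the in-use NVMe device, which is what the "Active devices: holder@nvme0n1p1:dm-0, ..., so not binding PCI dev" lines below report.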
00:04:48.655 21:00:26 -- setup/common.sh@57 -- # (( part++ )) 00:04:48.655 21:00:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.655 21:00:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.655 21:00:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.655 21:00:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:49.611 The operation has completed successfully. 00:04:49.611 21:00:27 -- setup/common.sh@57 -- # (( part++ )) 00:04:49.611 21:00:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.611 21:00:27 -- setup/common.sh@62 -- # wait 2145716 00:04:49.611 21:00:27 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:49.611 21:00:27 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.611 21:00:27 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.611 21:00:27 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:49.611 21:00:27 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:49.611 21:00:27 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.611 21:00:27 -- setup/devices.sh@161 -- # break 00:04:49.611 21:00:27 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.611 21:00:27 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:49.611 21:00:27 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:49.611 21:00:27 -- setup/devices.sh@166 -- # dm=dm-0 00:04:49.611 21:00:27 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:49.611 21:00:27 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:49.611 21:00:27 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.611 21:00:27 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:49.611 21:00:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.611 21:00:27 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.611 21:00:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:49.611 21:00:27 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.611 21:00:27 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.611 21:00:27 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:49.611 21:00:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:49.611 21:00:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:49.611 21:00:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:49.611 21:00:27 -- setup/devices.sh@53 -- # local found=0 00:04:49.611 21:00:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.611 21:00:27 -- setup/devices.sh@56 -- # : 00:04:49.611 21:00:27 -- 
setup/devices.sh@59 -- # local pci status 00:04:49.611 21:00:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.611 21:00:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:49.611 21:00:27 -- setup/devices.sh@47 -- # setup output config 00:04:49.611 21:00:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.611 21:00:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.916 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.916 21:00:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:52.916 21:00:30 -- setup/devices.sh@63 -- # found=1 00:04:52.916 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.916 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.917 21:00:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.917 21:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.178 21:00:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.178 21:00:31 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:53.178 21:00:31 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.178 21:00:31 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.178 21:00:31 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.178 21:00:31 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.178 21:00:31 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:53.178 21:00:31 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:53.178 21:00:31 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:53.178 21:00:31 -- setup/devices.sh@50 -- # local mount_point= 00:04:53.178 21:00:31 -- setup/devices.sh@51 -- # local test_file= 00:04:53.178 21:00:31 -- setup/devices.sh@53 -- # local found=0 00:04:53.178 21:00:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.178 21:00:31 -- setup/devices.sh@59 -- # local pci status 00:04:53.178 21:00:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.178 21:00:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:53.178 21:00:31 -- setup/devices.sh@47 -- # setup output config 00:04:53.178 21:00:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.178 21:00:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:56.479 21:00:34 -- setup/devices.sh@63 -- # found=1 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.479 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.479 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.480 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.480 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.480 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.480 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.480 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.480 21:00:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.480 21:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.062 21:00:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.062 21:00:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.062 21:00:34 -- setup/devices.sh@68 -- # return 0 00:04:57.062 21:00:34 -- setup/devices.sh@187 -- # cleanup_dm 00:04:57.062 21:00:34 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.062 21:00:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:57.062 21:00:34 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:57.062 21:00:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.062 21:00:34 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:57.062 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.062 21:00:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:57.062 21:00:34 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:57.062 00:04:57.062 real 0m10.545s 00:04:57.062 user 0m2.831s 00:04:57.062 sys 0m4.777s 00:04:57.062 21:00:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.062 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.062 ************************************ 00:04:57.062 END TEST dm_mount 00:04:57.062 ************************************ 00:04:57.062 21:00:34 -- setup/devices.sh@1 -- # cleanup 00:04:57.062 21:00:34 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:57.062 21:00:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.062 21:00:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.062 21:00:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:57.062 21:00:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.062 21:00:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.322 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:57.322 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:57.322 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:57.322 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:57.322 21:00:35 -- setup/devices.sh@12 -- # cleanup_dm 00:04:57.322 21:00:35 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.322 21:00:35 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:57.322 21:00:35 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.322 21:00:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:57.322 21:00:35 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.322 21:00:35 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:57.322 00:04:57.322 real 0m28.233s 00:04:57.322 user 0m8.560s 00:04:57.322 sys 0m14.541s 00:04:57.322 21:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.322 21:00:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.322 ************************************ 00:04:57.322 END TEST devices 00:04:57.322 ************************************ 00:04:57.322 00:04:57.322 real 1m34.723s 00:04:57.322 user 0m31.959s 00:04:57.322 sys 0m54.003s 00:04:57.322 21:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.322 21:00:35 -- common/autotest_common.sh@10 -- # set +x 00:04:57.322 ************************************ 00:04:57.322 END TEST setup.sh 00:04:57.322 ************************************ 00:04:57.322 21:00:35 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:00.625 Hugepages 00:05:00.625 node hugesize free / total 00:05:00.625 node0 1048576kB 0 / 0 00:05:00.625 node0 2048kB 2048 / 2048 00:05:00.625 node1 1048576kB 0 / 0 00:05:00.625 node1 2048kB 0 / 0 00:05:00.625 00:05:00.625 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.625 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:00.625 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:00.625 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:00.625 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:00.625 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:00.625 21:00:38 -- spdk/autotest.sh@141 -- # uname -s 00:05:00.885 21:00:38 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:00.885 21:00:38 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:00.886 21:00:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.190 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:05:04.190 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.190 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.450 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:04.450 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.450 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:06.363 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:06.363 21:00:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:07.305 21:00:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:07.305 21:00:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:07.305 21:00:45 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.305 21:00:45 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:07.305 21:00:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:07.305 21:00:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:07.305 21:00:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.305 21:00:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.305 21:00:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:07.566 21:00:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:07.566 21:00:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:07.566 21:00:45 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.870 Waiting for block devices as requested 00:05:10.870 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:10.870 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:10.870 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:11.171 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:11.171 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:11.171 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:11.171 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:11.431 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:11.431 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:11.431 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:11.692 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:11.692 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:11.692 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:11.951 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:11.951 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:11.951 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:11.951 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:12.210 21:00:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.210 21:00:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:12.210 21:00:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:12.210 21:00:50 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:12.210 21:00:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.210 21:00:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.210 21:00:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:12.210 21:00:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.210 21:00:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.210 21:00:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:12.210 21:00:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.210 21:00:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.210 21:00:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.210 21:00:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.210 21:00:50 -- common/autotest_common.sh@1542 -- # continue 00:05:12.210 21:00:50 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:12.210 21:00:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:12.210 21:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:12.470 21:00:50 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:12.470 21:00:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:12.470 21:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:12.470 21:00:50 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.775 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:15.775 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:16.355 21:00:54 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:16.355 21:00:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:16.355 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:16.355 21:00:54 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:16.355 21:00:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:16.355 21:00:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:16.355 21:00:54 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:16.355 21:00:54 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:16.355 21:00:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:16.355 21:00:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:16.355 
21:00:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:16.355 21:00:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.355 21:00:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.355 21:00:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:16.355 21:00:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:16.355 21:00:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:16.355 21:00:54 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:16.355 21:00:54 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:16.355 21:00:54 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:16.355 21:00:54 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:16.355 21:00:54 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:16.355 21:00:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:16.355 21:00:54 -- common/autotest_common.sh@1578 -- # return 0 00:05:16.355 21:00:54 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:16.355 21:00:54 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:16.355 21:00:54 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:16.355 21:00:54 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:16.355 21:00:54 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:16.355 21:00:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:16.355 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:16.355 21:00:54 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.355 21:00:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.355 21:00:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.355 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:16.355 ************************************ 00:05:16.355 START TEST env 00:05:16.355 ************************************ 00:05:16.355 21:00:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.355 * Looking for test storage... 
00:05:16.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:16.355 21:00:54 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.355 21:00:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.355 21:00:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.355 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:16.355 ************************************ 00:05:16.355 START TEST env_memory 00:05:16.355 ************************************ 00:05:16.355 21:00:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.355 00:05:16.355 00:05:16.355 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.355 http://cunit.sourceforge.net/ 00:05:16.355 00:05:16.355 00:05:16.355 Suite: memory 00:05:16.625 Test: alloc and free memory map ...[2024-06-08 21:00:54.453949] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.625 passed 00:05:16.625 Test: mem map translation ...[2024-06-08 21:00:54.479606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:16.625 [2024-06-08 21:00:54.479634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.626 [2024-06-08 21:00:54.479683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.626 [2024-06-08 21:00:54.479692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.626 passed 00:05:16.626 Test: mem map registration ...[2024-06-08 21:00:54.535012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:16.626 [2024-06-08 21:00:54.535035] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:16.626 passed 00:05:16.626 Test: mem map adjacent registrations ...passed 00:05:16.626 00:05:16.626 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.626 suites 1 1 n/a 0 0 00:05:16.626 tests 4 4 4 0 0 00:05:16.626 asserts 152 152 152 0 n/a 00:05:16.626 00:05:16.626 Elapsed time = 0.202 seconds 00:05:16.626 00:05:16.626 real 0m0.216s 00:05:16.626 user 0m0.204s 00:05:16.626 sys 0m0.011s 00:05:16.626 21:00:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.626 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:16.626 ************************************ 00:05:16.626 END TEST env_memory 00:05:16.626 ************************************ 00:05:16.626 21:00:54 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:16.626 21:00:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.626 21:00:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.626 21:00:54 -- common/autotest_common.sh@10 -- # set +x 
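For context on the env_memory output above: the START/END banners and the real/user/sys timings around each test are emitted by the harness's run_test wrapper that the trace shows being invoked. A simplified, illustrative sketch of that pattern (not part of the captured output; the real helper in autotest_common.sh also manages xtrace and exit codes, and the memory_ut path is taken from the log above):

    run_test() {                     # simplified stand-in for the harness helper
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # bash's time keyword prints the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut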
00:05:16.626 ************************************ 00:05:16.626 START TEST env_vtophys 00:05:16.626 ************************************ 00:05:16.626 21:00:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:16.626 EAL: lib.eal log level changed from notice to debug 00:05:16.626 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.626 EAL: Detected lcore 1 as core 1 on socket 0 00:05:16.626 EAL: Detected lcore 2 as core 2 on socket 0 00:05:16.626 EAL: Detected lcore 3 as core 3 on socket 0 00:05:16.626 EAL: Detected lcore 4 as core 4 on socket 0 00:05:16.626 EAL: Detected lcore 5 as core 5 on socket 0 00:05:16.626 EAL: Detected lcore 6 as core 6 on socket 0 00:05:16.626 EAL: Detected lcore 7 as core 7 on socket 0 00:05:16.626 EAL: Detected lcore 8 as core 8 on socket 0 00:05:16.626 EAL: Detected lcore 9 as core 9 on socket 0 00:05:16.626 EAL: Detected lcore 10 as core 10 on socket 0 00:05:16.626 EAL: Detected lcore 11 as core 11 on socket 0 00:05:16.626 EAL: Detected lcore 12 as core 12 on socket 0 00:05:16.626 EAL: Detected lcore 13 as core 13 on socket 0 00:05:16.626 EAL: Detected lcore 14 as core 14 on socket 0 00:05:16.626 EAL: Detected lcore 15 as core 15 on socket 0 00:05:16.626 EAL: Detected lcore 16 as core 16 on socket 0 00:05:16.626 EAL: Detected lcore 17 as core 17 on socket 0 00:05:16.626 EAL: Detected lcore 18 as core 18 on socket 0 00:05:16.626 EAL: Detected lcore 19 as core 19 on socket 0 00:05:16.626 EAL: Detected lcore 20 as core 20 on socket 0 00:05:16.626 EAL: Detected lcore 21 as core 21 on socket 0 00:05:16.626 EAL: Detected lcore 22 as core 22 on socket 0 00:05:16.626 EAL: Detected lcore 23 as core 23 on socket 0 00:05:16.626 EAL: Detected lcore 24 as core 24 on socket 0 00:05:16.626 EAL: Detected lcore 25 as core 25 on socket 0 00:05:16.626 EAL: Detected lcore 26 as core 26 on socket 0 00:05:16.626 EAL: Detected lcore 27 as core 27 on socket 0 00:05:16.626 EAL: Detected lcore 28 as core 28 on socket 0 00:05:16.626 EAL: Detected lcore 29 as core 29 on socket 0 00:05:16.626 EAL: Detected lcore 30 as core 30 on socket 0 00:05:16.626 EAL: Detected lcore 31 as core 31 on socket 0 00:05:16.627 EAL: Detected lcore 32 as core 32 on socket 0 00:05:16.627 EAL: Detected lcore 33 as core 33 on socket 0 00:05:16.627 EAL: Detected lcore 34 as core 34 on socket 0 00:05:16.627 EAL: Detected lcore 35 as core 35 on socket 0 00:05:16.627 EAL: Detected lcore 36 as core 0 on socket 1 00:05:16.627 EAL: Detected lcore 37 as core 1 on socket 1 00:05:16.627 EAL: Detected lcore 38 as core 2 on socket 1 00:05:16.627 EAL: Detected lcore 39 as core 3 on socket 1 00:05:16.627 EAL: Detected lcore 40 as core 4 on socket 1 00:05:16.627 EAL: Detected lcore 41 as core 5 on socket 1 00:05:16.627 EAL: Detected lcore 42 as core 6 on socket 1 00:05:16.627 EAL: Detected lcore 43 as core 7 on socket 1 00:05:16.627 EAL: Detected lcore 44 as core 8 on socket 1 00:05:16.627 EAL: Detected lcore 45 as core 9 on socket 1 00:05:16.627 EAL: Detected lcore 46 as core 10 on socket 1 00:05:16.627 EAL: Detected lcore 47 as core 11 on socket 1 00:05:16.627 EAL: Detected lcore 48 as core 12 on socket 1 00:05:16.627 EAL: Detected lcore 49 as core 13 on socket 1 00:05:16.627 EAL: Detected lcore 50 as core 14 on socket 1 00:05:16.627 EAL: Detected lcore 51 as core 15 on socket 1 00:05:16.627 EAL: Detected lcore 52 as core 16 on socket 1 00:05:16.627 EAL: Detected lcore 53 as core 17 on socket 1 00:05:16.627 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:16.627 EAL: Detected lcore 55 as core 19 on socket 1 00:05:16.627 EAL: Detected lcore 56 as core 20 on socket 1 00:05:16.627 EAL: Detected lcore 57 as core 21 on socket 1 00:05:16.627 EAL: Detected lcore 58 as core 22 on socket 1 00:05:16.627 EAL: Detected lcore 59 as core 23 on socket 1 00:05:16.627 EAL: Detected lcore 60 as core 24 on socket 1 00:05:16.627 EAL: Detected lcore 61 as core 25 on socket 1 00:05:16.627 EAL: Detected lcore 62 as core 26 on socket 1 00:05:16.627 EAL: Detected lcore 63 as core 27 on socket 1 00:05:16.627 EAL: Detected lcore 64 as core 28 on socket 1 00:05:16.627 EAL: Detected lcore 65 as core 29 on socket 1 00:05:16.627 EAL: Detected lcore 66 as core 30 on socket 1 00:05:16.627 EAL: Detected lcore 67 as core 31 on socket 1 00:05:16.627 EAL: Detected lcore 68 as core 32 on socket 1 00:05:16.627 EAL: Detected lcore 69 as core 33 on socket 1 00:05:16.627 EAL: Detected lcore 70 as core 34 on socket 1 00:05:16.627 EAL: Detected lcore 71 as core 35 on socket 1 00:05:16.627 EAL: Detected lcore 72 as core 0 on socket 0 00:05:16.627 EAL: Detected lcore 73 as core 1 on socket 0 00:05:16.627 EAL: Detected lcore 74 as core 2 on socket 0 00:05:16.627 EAL: Detected lcore 75 as core 3 on socket 0 00:05:16.627 EAL: Detected lcore 76 as core 4 on socket 0 00:05:16.627 EAL: Detected lcore 77 as core 5 on socket 0 00:05:16.627 EAL: Detected lcore 78 as core 6 on socket 0 00:05:16.627 EAL: Detected lcore 79 as core 7 on socket 0 00:05:16.627 EAL: Detected lcore 80 as core 8 on socket 0 00:05:16.627 EAL: Detected lcore 81 as core 9 on socket 0 00:05:16.627 EAL: Detected lcore 82 as core 10 on socket 0 00:05:16.627 EAL: Detected lcore 83 as core 11 on socket 0 00:05:16.627 EAL: Detected lcore 84 as core 12 on socket 0 00:05:16.627 EAL: Detected lcore 85 as core 13 on socket 0 00:05:16.627 EAL: Detected lcore 86 as core 14 on socket 0 00:05:16.627 EAL: Detected lcore 87 as core 15 on socket 0 00:05:16.627 EAL: Detected lcore 88 as core 16 on socket 0 00:05:16.627 EAL: Detected lcore 89 as core 17 on socket 0 00:05:16.627 EAL: Detected lcore 90 as core 18 on socket 0 00:05:16.627 EAL: Detected lcore 91 as core 19 on socket 0 00:05:16.627 EAL: Detected lcore 92 as core 20 on socket 0 00:05:16.627 EAL: Detected lcore 93 as core 21 on socket 0 00:05:16.627 EAL: Detected lcore 94 as core 22 on socket 0 00:05:16.627 EAL: Detected lcore 95 as core 23 on socket 0 00:05:16.627 EAL: Detected lcore 96 as core 24 on socket 0 00:05:16.627 EAL: Detected lcore 97 as core 25 on socket 0 00:05:16.627 EAL: Detected lcore 98 as core 26 on socket 0 00:05:16.627 EAL: Detected lcore 99 as core 27 on socket 0 00:05:16.627 EAL: Detected lcore 100 as core 28 on socket 0 00:05:16.627 EAL: Detected lcore 101 as core 29 on socket 0 00:05:16.627 EAL: Detected lcore 102 as core 30 on socket 0 00:05:16.627 EAL: Detected lcore 103 as core 31 on socket 0 00:05:16.627 EAL: Detected lcore 104 as core 32 on socket 0 00:05:16.627 EAL: Detected lcore 105 as core 33 on socket 0 00:05:16.627 EAL: Detected lcore 106 as core 34 on socket 0 00:05:16.627 EAL: Detected lcore 107 as core 35 on socket 0 00:05:16.627 EAL: Detected lcore 108 as core 0 on socket 1 00:05:16.627 EAL: Detected lcore 109 as core 1 on socket 1 00:05:16.627 EAL: Detected lcore 110 as core 2 on socket 1 00:05:16.627 EAL: Detected lcore 111 as core 3 on socket 1 00:05:16.627 EAL: Detected lcore 112 as core 4 on socket 1 00:05:16.627 EAL: Detected lcore 113 as core 5 on socket 1 00:05:16.627 EAL: Detected lcore 114 as core 6 on socket 1 00:05:16.627 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:16.627 EAL: Detected lcore 116 as core 8 on socket 1 00:05:16.627 EAL: Detected lcore 117 as core 9 on socket 1 00:05:16.627 EAL: Detected lcore 118 as core 10 on socket 1 00:05:16.627 EAL: Detected lcore 119 as core 11 on socket 1 00:05:16.627 EAL: Detected lcore 120 as core 12 on socket 1 00:05:16.627 EAL: Detected lcore 121 as core 13 on socket 1 00:05:16.627 EAL: Detected lcore 122 as core 14 on socket 1 00:05:16.627 EAL: Detected lcore 123 as core 15 on socket 1 00:05:16.627 EAL: Detected lcore 124 as core 16 on socket 1 00:05:16.627 EAL: Detected lcore 125 as core 17 on socket 1 00:05:16.627 EAL: Detected lcore 126 as core 18 on socket 1 00:05:16.627 EAL: Detected lcore 127 as core 19 on socket 1 00:05:16.627 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:16.627 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:16.627 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:16.627 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:16.627 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:16.627 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:16.627 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:16.627 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:16.627 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:16.627 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:16.627 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:16.627 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:16.627 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:16.627 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:16.627 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:16.627 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:16.627 EAL: Maximum logical cores by configuration: 128 00:05:16.627 EAL: Detected CPU lcores: 128 00:05:16.627 EAL: Detected NUMA nodes: 2 00:05:16.627 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:16.627 EAL: Detected shared linkage of DPDK 00:05:16.627 EAL: No shared files mode enabled, IPC will be disabled 00:05:16.627 EAL: Bus pci wants IOVA as 'DC' 00:05:16.627 EAL: Buses did not request a specific IOVA mode. 00:05:16.627 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:16.627 EAL: Selected IOVA mode 'VA' 00:05:16.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.627 EAL: Probing VFIO support... 00:05:16.627 EAL: IOMMU type 1 (Type 1) is supported 00:05:16.627 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:16.627 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:16.627 EAL: VFIO support initialized 00:05:16.627 EAL: Ask a virtual area of 0x2e000 bytes 00:05:16.627 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:16.627 EAL: Setting up physically contiguous memory... 
00:05:16.627 EAL: Setting maximum number of open files to 524288 00:05:16.627 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:16.627 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:16.627 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:16.627 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.627 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:16.627 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.627 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.627 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:16.627 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:16.628 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:16.628 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.628 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:16.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.628 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.628 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:16.628 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:16.628 EAL: Hugepages will be freed exactly as allocated. 00:05:16.628 EAL: No shared files mode enabled, IPC is disabled 00:05:16.628 EAL: No shared files mode enabled, IPC is disabled 00:05:16.628 EAL: TSC frequency is ~2400000 KHz 00:05:16.628 EAL: Main lcore 0 is ready (tid=7f5adee1fa00;cpuset=[0]) 00:05:16.628 EAL: Trying to obtain current memory policy. 00:05:16.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.628 EAL: Restoring previous memory policy: 0 00:05:16.628 EAL: request: mp_malloc_sync 00:05:16.628 EAL: No shared files mode enabled, IPC is disabled 00:05:16.628 EAL: Heap on socket 0 was expanded by 2MB 00:05:16.628 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:16.889 EAL: Mem event callback 'spdk:(nil)' registered 00:05:16.889 00:05:16.889 00:05:16.889 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.889 http://cunit.sourceforge.net/ 00:05:16.889 00:05:16.889 00:05:16.889 Suite: components_suite 00:05:16.889 Test: vtophys_malloc_test ...passed 00:05:16.889 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:16.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.889 EAL: Restoring previous memory policy: 4 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was expanded by 4MB 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was shrunk by 4MB 00:05:16.889 EAL: Trying to obtain current memory policy. 00:05:16.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.889 EAL: Restoring previous memory policy: 4 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was expanded by 6MB 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was shrunk by 6MB 00:05:16.889 EAL: Trying to obtain current memory policy. 00:05:16.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.889 EAL: Restoring previous memory policy: 4 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was expanded by 10MB 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was shrunk by 10MB 00:05:16.889 EAL: Trying to obtain current memory policy. 
00:05:16.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.889 EAL: Restoring previous memory policy: 4 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was expanded by 18MB 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was shrunk by 18MB 00:05:16.889 EAL: Trying to obtain current memory policy. 00:05:16.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.889 EAL: Restoring previous memory policy: 4 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.889 EAL: Heap on socket 0 was expanded by 34MB 00:05:16.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.889 EAL: request: mp_malloc_sync 00:05:16.889 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was shrunk by 34MB 00:05:16.890 EAL: Trying to obtain current memory policy. 00:05:16.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.890 EAL: Restoring previous memory policy: 4 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was expanded by 66MB 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was shrunk by 66MB 00:05:16.890 EAL: Trying to obtain current memory policy. 00:05:16.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.890 EAL: Restoring previous memory policy: 4 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was expanded by 130MB 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was shrunk by 130MB 00:05:16.890 EAL: Trying to obtain current memory policy. 00:05:16.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.890 EAL: Restoring previous memory policy: 4 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was expanded by 258MB 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was shrunk by 258MB 00:05:16.890 EAL: Trying to obtain current memory policy. 
00:05:16.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.890 EAL: Restoring previous memory policy: 4 00:05:16.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.890 EAL: request: mp_malloc_sync 00:05:16.890 EAL: No shared files mode enabled, IPC is disabled 00:05:16.890 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.150 EAL: request: mp_malloc_sync 00:05:17.150 EAL: No shared files mode enabled, IPC is disabled 00:05:17.150 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.150 EAL: Trying to obtain current memory policy. 00:05:17.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.150 EAL: Restoring previous memory policy: 4 00:05:17.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.150 EAL: request: mp_malloc_sync 00:05:17.150 EAL: No shared files mode enabled, IPC is disabled 00:05:17.150 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.410 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.410 EAL: request: mp_malloc_sync 00:05:17.410 EAL: No shared files mode enabled, IPC is disabled 00:05:17.410 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:17.410 passed 00:05:17.410 00:05:17.410 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.410 suites 1 1 n/a 0 0 00:05:17.410 tests 2 2 2 0 0 00:05:17.410 asserts 497 497 497 0 n/a 00:05:17.410 00:05:17.410 Elapsed time = 0.646 seconds 00:05:17.410 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.410 EAL: request: mp_malloc_sync 00:05:17.410 EAL: No shared files mode enabled, IPC is disabled 00:05:17.410 EAL: Heap on socket 0 was shrunk by 2MB 00:05:17.410 EAL: No shared files mode enabled, IPC is disabled 00:05:17.410 EAL: No shared files mode enabled, IPC is disabled 00:05:17.410 EAL: No shared files mode enabled, IPC is disabled 00:05:17.410 00:05:17.410 real 0m0.765s 00:05:17.410 user 0m0.404s 00:05:17.410 sys 0m0.334s 00:05:17.410 21:00:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.410 21:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 ************************************ 00:05:17.410 END TEST env_vtophys 00:05:17.410 ************************************ 00:05:17.410 21:00:55 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:17.410 21:00:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.410 21:00:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.410 21:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 ************************************ 00:05:17.410 START TEST env_pci 00:05:17.410 ************************************ 00:05:17.410 21:00:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:17.410 00:05:17.410 00:05:17.410 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.410 http://cunit.sourceforge.net/ 00:05:17.410 00:05:17.410 00:05:17.410 Suite: pci 00:05:17.410 Test: pci_hook ...[2024-06-08 21:00:55.481061] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2157216 has claimed it 00:05:17.671 EAL: Cannot find device (10000:00:01.0) 00:05:17.671 EAL: Failed to attach device on primary process 00:05:17.671 passed 00:05:17.671 00:05:17.671 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.671 suites 1 1 n/a 0 0 00:05:17.671 tests 1 1 1 0 0 
00:05:17.671 asserts 25 25 25 0 n/a 00:05:17.671 00:05:17.671 Elapsed time = 0.029 seconds 00:05:17.671 00:05:17.671 real 0m0.050s 00:05:17.671 user 0m0.019s 00:05:17.671 sys 0m0.030s 00:05:17.671 21:00:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.671 21:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.671 ************************************ 00:05:17.671 END TEST env_pci 00:05:17.671 ************************************ 00:05:17.671 21:00:55 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:17.671 21:00:55 -- env/env.sh@15 -- # uname 00:05:17.671 21:00:55 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:17.671 21:00:55 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:17.671 21:00:55 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:17.671 21:00:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:17.671 21:00:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.671 21:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.671 ************************************ 00:05:17.671 START TEST env_dpdk_post_init 00:05:17.671 ************************************ 00:05:17.671 21:00:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:17.671 EAL: Detected CPU lcores: 128 00:05:17.671 EAL: Detected NUMA nodes: 2 00:05:17.671 EAL: Detected shared linkage of DPDK 00:05:17.671 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.671 EAL: Selected IOVA mode 'VA' 00:05:17.671 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.671 EAL: VFIO support initialized 00:05:17.671 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.671 EAL: Using IOMMU type 1 (Type 1) 00:05:17.932 EAL: Ignore mapping IO port bar(1) 00:05:17.932 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:17.932 EAL: Ignore mapping IO port bar(1) 00:05:18.192 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:18.192 EAL: Ignore mapping IO port bar(1) 00:05:18.452 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:18.452 EAL: Ignore mapping IO port bar(1) 00:05:18.719 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:18.719 EAL: Ignore mapping IO port bar(1) 00:05:18.719 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:18.985 EAL: Ignore mapping IO port bar(1) 00:05:18.985 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:19.245 EAL: Ignore mapping IO port bar(1) 00:05:19.245 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:19.506 EAL: Ignore mapping IO port bar(1) 00:05:19.506 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:19.767 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:19.767 EAL: Ignore mapping IO port bar(1) 00:05:20.027 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:20.027 EAL: Ignore mapping IO port bar(1) 00:05:20.288 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:20.288 EAL: Ignore mapping IO port bar(1) 00:05:20.288 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:20.549 EAL: Ignore mapping IO port bar(1) 00:05:20.549 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:20.810 EAL: Ignore mapping IO port bar(1) 00:05:20.810 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:21.071 EAL: Ignore mapping IO port bar(1) 00:05:21.071 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:21.071 EAL: Ignore mapping IO port bar(1) 00:05:21.332 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:21.332 EAL: Ignore mapping IO port bar(1) 00:05:21.593 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:21.593 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:21.593 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:21.593 Starting DPDK initialization... 00:05:21.593 Starting SPDK post initialization... 00:05:21.593 SPDK NVMe probe 00:05:21.593 Attaching to 0000:65:00.0 00:05:21.593 Attached to 0000:65:00.0 00:05:21.593 Cleaning up... 00:05:23.511 00:05:23.511 real 0m5.711s 00:05:23.511 user 0m0.175s 00:05:23.511 sys 0m0.080s 00:05:23.511 21:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.512 21:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.512 ************************************ 00:05:23.512 END TEST env_dpdk_post_init 00:05:23.512 ************************************ 00:05:23.512 21:01:01 -- env/env.sh@26 -- # uname 00:05:23.512 21:01:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.512 21:01:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.512 21:01:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.512 21:01:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.512 21:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.512 ************************************ 00:05:23.512 START TEST env_mem_callbacks 00:05:23.512 ************************************ 00:05:23.512 21:01:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.512 EAL: Detected CPU lcores: 128 00:05:23.512 EAL: Detected NUMA nodes: 2 00:05:23.512 EAL: Detected shared linkage of DPDK 00:05:23.512 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.512 EAL: Selected IOVA mode 'VA' 00:05:23.512 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.512 EAL: VFIO support initialized 00:05:23.512 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.512 00:05:23.512 00:05:23.512 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.512 http://cunit.sourceforge.net/ 00:05:23.512 00:05:23.512 00:05:23.512 Suite: memory 00:05:23.512 Test: test ... 
00:05:23.512 register 0x200000200000 2097152 00:05:23.512 malloc 3145728 00:05:23.512 register 0x200000400000 4194304 00:05:23.512 buf 0x200000500000 len 3145728 PASSED 00:05:23.512 malloc 64 00:05:23.512 buf 0x2000004fff40 len 64 PASSED 00:05:23.512 malloc 4194304 00:05:23.512 register 0x200000800000 6291456 00:05:23.512 buf 0x200000a00000 len 4194304 PASSED 00:05:23.512 free 0x200000500000 3145728 00:05:23.512 free 0x2000004fff40 64 00:05:23.512 unregister 0x200000400000 4194304 PASSED 00:05:23.512 free 0x200000a00000 4194304 00:05:23.512 unregister 0x200000800000 6291456 PASSED 00:05:23.512 malloc 8388608 00:05:23.512 register 0x200000400000 10485760 00:05:23.512 buf 0x200000600000 len 8388608 PASSED 00:05:23.512 free 0x200000600000 8388608 00:05:23.512 unregister 0x200000400000 10485760 PASSED 00:05:23.512 passed 00:05:23.512 00:05:23.512 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.512 suites 1 1 n/a 0 0 00:05:23.513 tests 1 1 1 0 0 00:05:23.513 asserts 15 15 15 0 n/a 00:05:23.513 00:05:23.513 Elapsed time = 0.008 seconds 00:05:23.513 00:05:23.513 real 0m0.067s 00:05:23.513 user 0m0.024s 00:05:23.513 sys 0m0.043s 00:05:23.513 21:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.513 21:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.513 ************************************ 00:05:23.513 END TEST env_mem_callbacks 00:05:23.513 ************************************ 00:05:23.513 00:05:23.513 real 0m7.140s 00:05:23.513 user 0m0.944s 00:05:23.513 sys 0m0.752s 00:05:23.513 21:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.513 21:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.513 ************************************ 00:05:23.513 END TEST env 00:05:23.513 ************************************ 00:05:23.513 21:01:01 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:23.513 21:01:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.513 21:01:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.513 21:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.513 ************************************ 00:05:23.513 START TEST rpc 00:05:23.513 ************************************ 00:05:23.513 21:01:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:23.513 * Looking for test storage... 00:05:23.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.513 21:01:01 -- rpc/rpc.sh@65 -- # spdk_pid=2158624 00:05:23.513 21:01:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.514 21:01:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:23.514 21:01:01 -- rpc/rpc.sh@67 -- # waitforlisten 2158624 00:05:23.514 21:01:01 -- common/autotest_common.sh@819 -- # '[' -z 2158624 ']' 00:05:23.514 21:01:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.514 21:01:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.514 21:01:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
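The rpc_integrity test that follows drives spdk_tgt purely over JSON-RPC through the rpc_cmd wrapper. As a rough equivalent run outside the harness with scripts/rpc.py directly (paths assume an SPDK build tree like the one above; illustrative only, not part of the captured log):

    ./build/bin/spdk_tgt -e bdev &                 # start the target with the bdev tracepoint group, as above
    # wait until /var/tmp/spdk.sock accepts RPCs, then:
    ./scripts/rpc.py bdev_malloc_create 8 512      # 8 MB malloc bdev, 512-byte blocks -> "Malloc0"
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 1
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 0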
00:05:23.514 21:01:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.514 21:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.797 [2024-06-08 21:01:01.623887] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:23.797 [2024-06-08 21:01:01.623945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158624 ] 00:05:23.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.797 [2024-06-08 21:01:01.682598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.797 [2024-06-08 21:01:01.746958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.797 [2024-06-08 21:01:01.747076] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:23.797 [2024-06-08 21:01:01.747085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2158624' to capture a snapshot of events at runtime. 00:05:23.797 [2024-06-08 21:01:01.747091] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2158624 for offline analysis/debug. 00:05:23.797 [2024-06-08 21:01:01.747111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.367 21:01:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:24.367 21:01:02 -- common/autotest_common.sh@852 -- # return 0 00:05:24.367 21:01:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:24.367 21:01:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:24.367 21:01:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:24.367 21:01:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:24.367 21:01:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.367 21:01:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.367 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.367 ************************************ 00:05:24.367 START TEST rpc_integrity 00:05:24.367 ************************************ 00:05:24.367 21:01:02 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:24.367 21:01:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.367 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.367 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.367 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.367 21:01:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.367 21:01:02 -- rpc/rpc.sh@13 -- # jq length 00:05:24.367 21:01:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.367 21:01:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.367 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.367 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.367 21:01:02 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:24.367 21:01:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:24.367 21:01:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.367 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.367 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.628 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.628 21:01:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.628 { 00:05:24.628 "name": "Malloc0", 00:05:24.628 "aliases": [ 00:05:24.628 "600774d5-3224-485c-8b92-4f0e9695250e" 00:05:24.628 ], 00:05:24.628 "product_name": "Malloc disk", 00:05:24.628 "block_size": 512, 00:05:24.628 "num_blocks": 16384, 00:05:24.628 "uuid": "600774d5-3224-485c-8b92-4f0e9695250e", 00:05:24.628 "assigned_rate_limits": { 00:05:24.628 "rw_ios_per_sec": 0, 00:05:24.628 "rw_mbytes_per_sec": 0, 00:05:24.628 "r_mbytes_per_sec": 0, 00:05:24.628 "w_mbytes_per_sec": 0 00:05:24.628 }, 00:05:24.628 "claimed": false, 00:05:24.628 "zoned": false, 00:05:24.628 "supported_io_types": { 00:05:24.628 "read": true, 00:05:24.628 "write": true, 00:05:24.628 "unmap": true, 00:05:24.628 "write_zeroes": true, 00:05:24.628 "flush": true, 00:05:24.628 "reset": true, 00:05:24.628 "compare": false, 00:05:24.628 "compare_and_write": false, 00:05:24.628 "abort": true, 00:05:24.628 "nvme_admin": false, 00:05:24.628 "nvme_io": false 00:05:24.628 }, 00:05:24.628 "memory_domains": [ 00:05:24.628 { 00:05:24.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.628 "dma_device_type": 2 00:05:24.628 } 00:05:24.628 ], 00:05:24.628 "driver_specific": {} 00:05:24.628 } 00:05:24.628 ]' 00:05:24.628 21:01:02 -- rpc/rpc.sh@17 -- # jq length 00:05:24.628 21:01:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.628 21:01:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:24.628 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.628 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.628 [2024-06-08 21:01:02.514103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:24.628 [2024-06-08 21:01:02.514137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.628 [2024-06-08 21:01:02.514149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18aa470 00:05:24.628 [2024-06-08 21:01:02.514156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.628 [2024-06-08 21:01:02.515509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.628 [2024-06-08 21:01:02.515531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.628 Passthru0 00:05:24.628 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.628 21:01:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.628 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.628 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.628 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.628 21:01:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.628 { 00:05:24.628 "name": "Malloc0", 00:05:24.628 "aliases": [ 00:05:24.628 "600774d5-3224-485c-8b92-4f0e9695250e" 00:05:24.628 ], 00:05:24.628 "product_name": "Malloc disk", 00:05:24.628 "block_size": 512, 00:05:24.628 "num_blocks": 16384, 00:05:24.628 "uuid": "600774d5-3224-485c-8b92-4f0e9695250e", 00:05:24.628 "assigned_rate_limits": { 00:05:24.628 "rw_ios_per_sec": 0, 00:05:24.628 "rw_mbytes_per_sec": 0, 00:05:24.628 
"r_mbytes_per_sec": 0, 00:05:24.628 "w_mbytes_per_sec": 0 00:05:24.628 }, 00:05:24.628 "claimed": true, 00:05:24.628 "claim_type": "exclusive_write", 00:05:24.628 "zoned": false, 00:05:24.628 "supported_io_types": { 00:05:24.628 "read": true, 00:05:24.628 "write": true, 00:05:24.628 "unmap": true, 00:05:24.628 "write_zeroes": true, 00:05:24.628 "flush": true, 00:05:24.628 "reset": true, 00:05:24.628 "compare": false, 00:05:24.628 "compare_and_write": false, 00:05:24.628 "abort": true, 00:05:24.628 "nvme_admin": false, 00:05:24.628 "nvme_io": false 00:05:24.628 }, 00:05:24.628 "memory_domains": [ 00:05:24.628 { 00:05:24.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.628 "dma_device_type": 2 00:05:24.628 } 00:05:24.628 ], 00:05:24.628 "driver_specific": {} 00:05:24.628 }, 00:05:24.628 { 00:05:24.628 "name": "Passthru0", 00:05:24.628 "aliases": [ 00:05:24.628 "ca79cfb9-bcae-535c-9db3-0d8402da78c4" 00:05:24.628 ], 00:05:24.628 "product_name": "passthru", 00:05:24.628 "block_size": 512, 00:05:24.628 "num_blocks": 16384, 00:05:24.628 "uuid": "ca79cfb9-bcae-535c-9db3-0d8402da78c4", 00:05:24.628 "assigned_rate_limits": { 00:05:24.628 "rw_ios_per_sec": 0, 00:05:24.628 "rw_mbytes_per_sec": 0, 00:05:24.628 "r_mbytes_per_sec": 0, 00:05:24.628 "w_mbytes_per_sec": 0 00:05:24.628 }, 00:05:24.628 "claimed": false, 00:05:24.628 "zoned": false, 00:05:24.628 "supported_io_types": { 00:05:24.628 "read": true, 00:05:24.628 "write": true, 00:05:24.628 "unmap": true, 00:05:24.628 "write_zeroes": true, 00:05:24.628 "flush": true, 00:05:24.628 "reset": true, 00:05:24.628 "compare": false, 00:05:24.628 "compare_and_write": false, 00:05:24.628 "abort": true, 00:05:24.628 "nvme_admin": false, 00:05:24.628 "nvme_io": false 00:05:24.628 }, 00:05:24.628 "memory_domains": [ 00:05:24.628 { 00:05:24.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.628 "dma_device_type": 2 00:05:24.628 } 00:05:24.628 ], 00:05:24.628 "driver_specific": { 00:05:24.628 "passthru": { 00:05:24.628 "name": "Passthru0", 00:05:24.628 "base_bdev_name": "Malloc0" 00:05:24.628 } 00:05:24.628 } 00:05:24.628 } 00:05:24.628 ]' 00:05:24.628 21:01:02 -- rpc/rpc.sh@21 -- # jq length 00:05:24.628 21:01:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.628 21:01:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.628 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.628 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.628 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.628 21:01:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.628 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.628 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.628 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.629 21:01:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.629 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.629 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.629 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.629 21:01:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.629 21:01:02 -- rpc/rpc.sh@26 -- # jq length 00:05:24.629 21:01:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.629 00:05:24.629 real 0m0.280s 00:05:24.629 user 0m0.180s 00:05:24.629 sys 0m0.035s 00:05:24.629 21:01:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.629 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.629 ************************************ 
00:05:24.629 END TEST rpc_integrity 00:05:24.629 ************************************ 00:05:24.629 21:01:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.629 21:01:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.629 21:01:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.629 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.629 ************************************ 00:05:24.629 START TEST rpc_plugins 00:05:24.629 ************************************ 00:05:24.629 21:01:02 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:24.629 21:01:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.629 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.629 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.629 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.629 21:01:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.629 21:01:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.629 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.629 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.890 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.890 21:01:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.890 { 00:05:24.890 "name": "Malloc1", 00:05:24.890 "aliases": [ 00:05:24.890 "766b9ff6-265d-4b8c-8ba7-0cb862305696" 00:05:24.890 ], 00:05:24.890 "product_name": "Malloc disk", 00:05:24.890 "block_size": 4096, 00:05:24.890 "num_blocks": 256, 00:05:24.890 "uuid": "766b9ff6-265d-4b8c-8ba7-0cb862305696", 00:05:24.890 "assigned_rate_limits": { 00:05:24.890 "rw_ios_per_sec": 0, 00:05:24.890 "rw_mbytes_per_sec": 0, 00:05:24.890 "r_mbytes_per_sec": 0, 00:05:24.890 "w_mbytes_per_sec": 0 00:05:24.890 }, 00:05:24.890 "claimed": false, 00:05:24.890 "zoned": false, 00:05:24.890 "supported_io_types": { 00:05:24.890 "read": true, 00:05:24.890 "write": true, 00:05:24.890 "unmap": true, 00:05:24.890 "write_zeroes": true, 00:05:24.890 "flush": true, 00:05:24.890 "reset": true, 00:05:24.890 "compare": false, 00:05:24.890 "compare_and_write": false, 00:05:24.890 "abort": true, 00:05:24.890 "nvme_admin": false, 00:05:24.890 "nvme_io": false 00:05:24.890 }, 00:05:24.890 "memory_domains": [ 00:05:24.890 { 00:05:24.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.890 "dma_device_type": 2 00:05:24.890 } 00:05:24.890 ], 00:05:24.890 "driver_specific": {} 00:05:24.890 } 00:05:24.890 ]' 00:05:24.890 21:01:02 -- rpc/rpc.sh@32 -- # jq length 00:05:24.890 21:01:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.890 21:01:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.890 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.890 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.890 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.890 21:01:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.890 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.890 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.890 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.890 21:01:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.890 21:01:02 -- rpc/rpc.sh@36 -- # jq length 00:05:24.890 21:01:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.890 00:05:24.890 real 0m0.145s 00:05:24.890 user 0m0.093s 00:05:24.890 sys 0m0.020s 00:05:24.890 21:01:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.890 21:01:02 -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.890 ************************************ 00:05:24.890 END TEST rpc_plugins 00:05:24.890 ************************************ 00:05:24.890 21:01:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:24.890 21:01:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.890 21:01:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.891 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.891 ************************************ 00:05:24.891 START TEST rpc_trace_cmd_test 00:05:24.891 ************************************ 00:05:24.891 21:01:02 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:24.891 21:01:02 -- rpc/rpc.sh@40 -- # local info 00:05:24.891 21:01:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.891 21:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.891 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.891 21:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.891 21:01:02 -- rpc/rpc.sh@42 -- # info='{ 00:05:24.891 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2158624", 00:05:24.891 "tpoint_group_mask": "0x8", 00:05:24.891 "iscsi_conn": { 00:05:24.891 "mask": "0x2", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "scsi": { 00:05:24.891 "mask": "0x4", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "bdev": { 00:05:24.891 "mask": "0x8", 00:05:24.891 "tpoint_mask": "0xffffffffffffffff" 00:05:24.891 }, 00:05:24.891 "nvmf_rdma": { 00:05:24.891 "mask": "0x10", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "nvmf_tcp": { 00:05:24.891 "mask": "0x20", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "ftl": { 00:05:24.891 "mask": "0x40", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "blobfs": { 00:05:24.891 "mask": "0x80", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "dsa": { 00:05:24.891 "mask": "0x200", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "thread": { 00:05:24.891 "mask": "0x400", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "nvme_pcie": { 00:05:24.891 "mask": "0x800", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "iaa": { 00:05:24.891 "mask": "0x1000", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "nvme_tcp": { 00:05:24.891 "mask": "0x2000", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 }, 00:05:24.891 "bdev_nvme": { 00:05:24.891 "mask": "0x4000", 00:05:24.891 "tpoint_mask": "0x0" 00:05:24.891 } 00:05:24.891 }' 00:05:24.891 21:01:02 -- rpc/rpc.sh@43 -- # jq length 00:05:24.891 21:01:02 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:24.891 21:01:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:25.150 21:01:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:25.150 21:01:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:25.151 21:01:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:25.151 21:01:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:25.151 21:01:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:25.151 21:01:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:25.151 21:01:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:25.151 00:05:25.151 real 0m0.226s 00:05:25.151 user 0m0.189s 00:05:25.151 sys 0m0.028s 00:05:25.151 21:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.151 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.151 ************************************ 
00:05:25.151 END TEST rpc_trace_cmd_test 00:05:25.151 ************************************ 00:05:25.151 21:01:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:25.151 21:01:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.151 21:01:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.151 21:01:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.151 21:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.151 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.151 ************************************ 00:05:25.151 START TEST rpc_daemon_integrity 00:05:25.151 ************************************ 00:05:25.151 21:01:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:25.151 21:01:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.151 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.151 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.151 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.151 21:01:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.151 21:01:03 -- rpc/rpc.sh@13 -- # jq length 00:05:25.151 21:01:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.151 21:01:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.151 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.151 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.151 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.151 21:01:03 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:25.151 21:01:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.151 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.151 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.151 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.411 21:01:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.411 { 00:05:25.411 "name": "Malloc2", 00:05:25.411 "aliases": [ 00:05:25.411 "55327ac9-3c4a-4e6c-8bf9-8636519b108a" 00:05:25.411 ], 00:05:25.411 "product_name": "Malloc disk", 00:05:25.411 "block_size": 512, 00:05:25.411 "num_blocks": 16384, 00:05:25.411 "uuid": "55327ac9-3c4a-4e6c-8bf9-8636519b108a", 00:05:25.411 "assigned_rate_limits": { 00:05:25.411 "rw_ios_per_sec": 0, 00:05:25.411 "rw_mbytes_per_sec": 0, 00:05:25.411 "r_mbytes_per_sec": 0, 00:05:25.411 "w_mbytes_per_sec": 0 00:05:25.411 }, 00:05:25.411 "claimed": false, 00:05:25.411 "zoned": false, 00:05:25.411 "supported_io_types": { 00:05:25.411 "read": true, 00:05:25.411 "write": true, 00:05:25.411 "unmap": true, 00:05:25.411 "write_zeroes": true, 00:05:25.411 "flush": true, 00:05:25.411 "reset": true, 00:05:25.411 "compare": false, 00:05:25.411 "compare_and_write": false, 00:05:25.411 "abort": true, 00:05:25.411 "nvme_admin": false, 00:05:25.411 "nvme_io": false 00:05:25.411 }, 00:05:25.411 "memory_domains": [ 00:05:25.411 { 00:05:25.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.411 "dma_device_type": 2 00:05:25.411 } 00:05:25.411 ], 00:05:25.411 "driver_specific": {} 00:05:25.411 } 00:05:25.411 ]' 00:05:25.411 21:01:03 -- rpc/rpc.sh@17 -- # jq length 00:05:25.411 21:01:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.411 21:01:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:25.411 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.411 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.411 [2024-06-08 21:01:03.292191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:25.411 [2024-06-08 
21:01:03.292221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.411 [2024-06-08 21:01:03.292233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18acf00 00:05:25.411 [2024-06-08 21:01:03.292240] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.411 [2024-06-08 21:01:03.293453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.411 [2024-06-08 21:01:03.293474] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.411 Passthru0 00:05:25.411 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.411 21:01:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.411 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.411 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.411 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.411 21:01:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.411 { 00:05:25.411 "name": "Malloc2", 00:05:25.411 "aliases": [ 00:05:25.411 "55327ac9-3c4a-4e6c-8bf9-8636519b108a" 00:05:25.411 ], 00:05:25.411 "product_name": "Malloc disk", 00:05:25.411 "block_size": 512, 00:05:25.411 "num_blocks": 16384, 00:05:25.411 "uuid": "55327ac9-3c4a-4e6c-8bf9-8636519b108a", 00:05:25.411 "assigned_rate_limits": { 00:05:25.411 "rw_ios_per_sec": 0, 00:05:25.411 "rw_mbytes_per_sec": 0, 00:05:25.411 "r_mbytes_per_sec": 0, 00:05:25.411 "w_mbytes_per_sec": 0 00:05:25.411 }, 00:05:25.411 "claimed": true, 00:05:25.411 "claim_type": "exclusive_write", 00:05:25.411 "zoned": false, 00:05:25.411 "supported_io_types": { 00:05:25.411 "read": true, 00:05:25.411 "write": true, 00:05:25.411 "unmap": true, 00:05:25.411 "write_zeroes": true, 00:05:25.411 "flush": true, 00:05:25.411 "reset": true, 00:05:25.411 "compare": false, 00:05:25.411 "compare_and_write": false, 00:05:25.411 "abort": true, 00:05:25.411 "nvme_admin": false, 00:05:25.411 "nvme_io": false 00:05:25.411 }, 00:05:25.411 "memory_domains": [ 00:05:25.411 { 00:05:25.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.411 "dma_device_type": 2 00:05:25.411 } 00:05:25.411 ], 00:05:25.411 "driver_specific": {} 00:05:25.411 }, 00:05:25.411 { 00:05:25.411 "name": "Passthru0", 00:05:25.411 "aliases": [ 00:05:25.411 "d14f1f2f-d65f-50f9-b020-9a01ea49cd8c" 00:05:25.411 ], 00:05:25.411 "product_name": "passthru", 00:05:25.411 "block_size": 512, 00:05:25.411 "num_blocks": 16384, 00:05:25.411 "uuid": "d14f1f2f-d65f-50f9-b020-9a01ea49cd8c", 00:05:25.411 "assigned_rate_limits": { 00:05:25.411 "rw_ios_per_sec": 0, 00:05:25.411 "rw_mbytes_per_sec": 0, 00:05:25.411 "r_mbytes_per_sec": 0, 00:05:25.411 "w_mbytes_per_sec": 0 00:05:25.411 }, 00:05:25.411 "claimed": false, 00:05:25.411 "zoned": false, 00:05:25.411 "supported_io_types": { 00:05:25.411 "read": true, 00:05:25.411 "write": true, 00:05:25.411 "unmap": true, 00:05:25.411 "write_zeroes": true, 00:05:25.411 "flush": true, 00:05:25.411 "reset": true, 00:05:25.411 "compare": false, 00:05:25.411 "compare_and_write": false, 00:05:25.411 "abort": true, 00:05:25.411 "nvme_admin": false, 00:05:25.411 "nvme_io": false 00:05:25.411 }, 00:05:25.411 "memory_domains": [ 00:05:25.411 { 00:05:25.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.411 "dma_device_type": 2 00:05:25.411 } 00:05:25.411 ], 00:05:25.411 "driver_specific": { 00:05:25.411 "passthru": { 00:05:25.411 "name": "Passthru0", 00:05:25.411 "base_bdev_name": "Malloc2" 00:05:25.412 } 00:05:25.412 } 00:05:25.412 } 
00:05:25.412 ]' 00:05:25.412 21:01:03 -- rpc/rpc.sh@21 -- # jq length 00:05:25.412 21:01:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.412 21:01:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.412 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.412 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.412 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.412 21:01:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:25.412 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.412 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.412 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.412 21:01:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.412 21:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.412 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.412 21:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.412 21:01:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.412 21:01:03 -- rpc/rpc.sh@26 -- # jq length 00:05:25.412 21:01:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.412 00:05:25.412 real 0m0.278s 00:05:25.412 user 0m0.182s 00:05:25.412 sys 0m0.036s 00:05:25.412 21:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.412 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.412 ************************************ 00:05:25.412 END TEST rpc_daemon_integrity 00:05:25.412 ************************************ 00:05:25.412 21:01:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:25.412 21:01:03 -- rpc/rpc.sh@84 -- # killprocess 2158624 00:05:25.412 21:01:03 -- common/autotest_common.sh@926 -- # '[' -z 2158624 ']' 00:05:25.412 21:01:03 -- common/autotest_common.sh@930 -- # kill -0 2158624 00:05:25.412 21:01:03 -- common/autotest_common.sh@931 -- # uname 00:05:25.412 21:01:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.412 21:01:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2158624 00:05:25.672 21:01:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.672 21:01:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.672 21:01:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2158624' 00:05:25.672 killing process with pid 2158624 00:05:25.672 21:01:03 -- common/autotest_common.sh@945 -- # kill 2158624 00:05:25.672 21:01:03 -- common/autotest_common.sh@950 -- # wait 2158624 00:05:25.672 00:05:25.672 real 0m2.253s 00:05:25.672 user 0m2.947s 00:05:25.672 sys 0m0.593s 00:05:25.672 21:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.672 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.672 ************************************ 00:05:25.672 END TEST rpc 00:05:25.672 ************************************ 00:05:25.933 21:01:03 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.933 21:01:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.933 21:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.933 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.933 ************************************ 00:05:25.933 START TEST rpc_client 00:05:25.933 ************************************ 00:05:25.933 21:01:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
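The rpc_integrity and rpc_daemon_integrity passes above both walk the same bdev lifecycle over the target's RPC socket: create a malloc bdev, stack a passthru bdev on it (which claims the base with an exclusive_write claim), confirm both show up in bdev_get_bdevs, then delete in reverse order. A condensed sketch of that sequence, assuming a spdk_tgt already listening on /var/tmp/spdk_tgt.sock and calling rpc.py by its repo-relative path instead of the rpc_cmd wrapper used in the log:

# Create an 8 MiB malloc bdev with a 512-byte block size (the "8 512" from rpc.sh)
MALLOC=$(scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512)
# Stack a passthru bdev on top; this claims the base bdev ("claimed": true above)
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b "$MALLOC" -p Passthru0
# Both bdevs should now be reported
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_get_bdevs | jq length    # expect 2
# Tear down in reverse order and confirm the list is empty again
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_delete Passthru0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete "$MALLOC"
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_get_bdevs | jq length    # expect 0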
00:05:25.933 * Looking for test storage... 00:05:25.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:25.933 21:01:03 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:25.933 OK 00:05:25.933 21:01:03 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.933 00:05:25.933 real 0m0.121s 00:05:25.933 user 0m0.055s 00:05:25.933 sys 0m0.073s 00:05:25.933 21:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.933 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.933 ************************************ 00:05:25.933 END TEST rpc_client 00:05:25.933 ************************************ 00:05:25.933 21:01:03 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.933 21:01:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.933 21:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.933 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.933 ************************************ 00:05:25.933 START TEST json_config 00:05:25.933 ************************************ 00:05:25.933 21:01:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.933 21:01:04 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.933 21:01:04 -- nvmf/common.sh@7 -- # uname -s 00:05:25.933 21:01:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.933 21:01:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.933 21:01:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.933 21:01:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.933 21:01:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.933 21:01:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.933 21:01:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.933 21:01:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.933 21:01:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.933 21:01:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.195 21:01:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:26.195 21:01:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:26.195 21:01:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.195 21:01:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.195 21:01:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.195 21:01:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.195 21:01:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.195 21:01:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.195 21:01:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.195 21:01:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.195 21:01:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.195 21:01:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.195 21:01:04 -- paths/export.sh@5 -- # export PATH 00:05:26.195 21:01:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.195 21:01:04 -- nvmf/common.sh@46 -- # : 0 00:05:26.195 21:01:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:26.195 21:01:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:26.195 21:01:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:26.195 21:01:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.195 21:01:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.195 21:01:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:26.195 21:01:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:26.195 21:01:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:26.195 21:01:04 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.195 21:01:04 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.195 21:01:04 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:26.195 21:01:04 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.195 21:01:04 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:26.195 21:01:04 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.195 21:01:04 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:26.195 21:01:04 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:26.195 21:01:04 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:26.195 21:01:04 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:26.195 21:01:04 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.195 21:01:04 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:26.195 INFO: JSON configuration test init 00:05:26.195 21:01:04 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:26.195 21:01:04 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:26.195 21:01:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:26.195 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.195 21:01:04 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:26.195 21:01:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:26.195 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.195 21:01:04 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.195 21:01:04 -- json_config/json_config.sh@98 -- # local app=target 00:05:26.195 21:01:04 -- json_config/json_config.sh@99 -- # shift 00:05:26.195 21:01:04 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:26.195 21:01:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:26.195 21:01:04 -- json_config/json_config.sh@111 -- # app_pid[$app]=2159227 00:05:26.195 21:01:04 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:26.195 Waiting for target to run... 00:05:26.195 21:01:04 -- json_config/json_config.sh@114 -- # waitforlisten 2159227 /var/tmp/spdk_tgt.sock 00:05:26.195 21:01:04 -- common/autotest_common.sh@819 -- # '[' -z 2159227 ']' 00:05:26.195 21:01:04 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.195 21:01:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.195 21:01:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.195 21:01:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.195 21:01:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.195 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.195 [2024-06-08 21:01:04.109522] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
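The json_config suite launches its target with --wait-for-rpc (visible in the spdk_tgt command line above) so that no subsystem is initialized until a configuration arrives over RPC. A minimal sketch of that launch-and-wait pattern; rpc_get_methods is used here as a simple liveness probe standing in for the waitforlisten helper the test actually calls:

# Core mask 0x1, 1024 MB of memory, RPC-only startup until a config is loaded
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
# Poll the UNIX-domain RPC socket until the target answers
until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
done
# From here the test drives the target purely through rpc.py (load_config, save_config, ...)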
00:05:26.195 [2024-06-08 21:01:04.109594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159227 ] 00:05:26.195 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.457 [2024-06-08 21:01:04.424686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.457 [2024-06-08 21:01:04.475110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.457 [2024-06-08 21:01:04.475239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.041 21:01:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.041 21:01:04 -- common/autotest_common.sh@852 -- # return 0 00:05:27.041 21:01:04 -- json_config/json_config.sh@115 -- # echo '' 00:05:27.041 00:05:27.041 21:01:04 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:27.041 21:01:04 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:27.041 21:01:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.041 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.041 21:01:04 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:27.041 21:01:04 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:27.041 21:01:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:27.041 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.041 21:01:04 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.041 21:01:04 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:27.041 21:01:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:27.387 21:01:05 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:27.387 21:01:05 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:27.387 21:01:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.387 21:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.387 21:01:05 -- json_config/json_config.sh@48 -- # local ret=0 00:05:27.387 21:01:05 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:27.387 21:01:05 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:27.387 21:01:05 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:27.387 21:01:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:27.387 21:01:05 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:27.648 21:01:05 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:27.648 21:01:05 -- json_config/json_config.sh@51 -- # local get_types 00:05:27.648 21:01:05 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:27.648 21:01:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:27.648 21:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.648 21:01:05 -- json_config/json_config.sh@58 -- # return 0 00:05:27.648 21:01:05 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:27.648 21:01:05 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:27.648 21:01:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.648 21:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.648 21:01:05 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:27.648 21:01:05 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:27.648 21:01:05 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:27.648 21:01:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:27.916 MallocForNvmf0 00:05:27.916 21:01:05 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:27.916 21:01:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:27.916 MallocForNvmf1 00:05:27.916 21:01:05 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:27.916 21:01:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.176 [2024-06-08 21:01:06.115395] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.176 21:01:06 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.176 21:01:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.436 21:01:06 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:28.436 21:01:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:28.436 21:01:06 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:28.436 21:01:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:28.695 21:01:06 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:28.695 21:01:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:28.695 [2024-06-08 21:01:06.773530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
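Condensed, the create_nvmf_subsystem_config step that just completed is the following RPC sequence; the parameters are copied from the log, and the rpc() shell function is only a shorthand for this sketch:

rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
# Backing bdevs for the two namespaces
rpc bdev_malloc_create 8 512  --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport: -u io-unit-size 8192, -c in-capsule-data-size 0
rpc nvmf_create_transport -t tcp -u 8192 -c 0
# Subsystem allowing any host (-a) with a fixed serial, two namespaces, one loopback listener
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice above corresponds to that final add_listener call.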
00:05:28.956 21:01:06 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:28.956 21:01:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.956 21:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:28.956 21:01:06 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:28.956 21:01:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.956 21:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:28.956 21:01:06 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:28.956 21:01:06 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.956 21:01:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.956 MallocBdevForConfigChangeCheck 00:05:28.956 21:01:07 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:28.956 21:01:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.956 21:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.216 21:01:07 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:29.216 21:01:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.477 21:01:07 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:29.477 INFO: shutting down applications... 00:05:29.477 21:01:07 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:29.477 21:01:07 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:29.477 21:01:07 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:29.477 21:01:07 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:29.739 Calling clear_iscsi_subsystem 00:05:29.740 Calling clear_nvmf_subsystem 00:05:29.740 Calling clear_nbd_subsystem 00:05:29.740 Calling clear_ublk_subsystem 00:05:29.740 Calling clear_vhost_blk_subsystem 00:05:29.740 Calling clear_vhost_scsi_subsystem 00:05:29.740 Calling clear_scheduler_subsystem 00:05:29.740 Calling clear_bdev_subsystem 00:05:29.740 Calling clear_accel_subsystem 00:05:29.740 Calling clear_vmd_subsystem 00:05:29.740 Calling clear_sock_subsystem 00:05:29.740 Calling clear_iobuf_subsystem 00:05:29.740 21:01:07 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:29.740 21:01:07 -- json_config/json_config.sh@396 -- # count=100 00:05:29.740 21:01:07 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:29.740 21:01:07 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.740 21:01:07 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:29.740 21:01:07 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:30.003 21:01:08 -- json_config/json_config.sh@398 -- # break 00:05:30.003 21:01:08 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:30.004 21:01:08 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:30.004 21:01:08 -- json_config/json_config.sh@120 -- # local app=target 00:05:30.004 21:01:08 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:30.004 21:01:08 -- json_config/json_config.sh@124 -- # [[ -n 2159227 ]] 00:05:30.004 21:01:08 -- json_config/json_config.sh@127 -- # kill -SIGINT 2159227 00:05:30.004 21:01:08 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:30.004 21:01:08 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:30.004 21:01:08 -- json_config/json_config.sh@130 -- # kill -0 2159227 00:05:30.004 21:01:08 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:30.575 21:01:08 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:30.575 21:01:08 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:30.575 21:01:08 -- json_config/json_config.sh@130 -- # kill -0 2159227 00:05:30.575 21:01:08 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:30.575 21:01:08 -- json_config/json_config.sh@132 -- # break 00:05:30.575 21:01:08 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:30.575 21:01:08 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:30.575 SPDK target shutdown done 00:05:30.575 21:01:08 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:30.575 INFO: relaunching applications... 00:05:30.575 21:01:08 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.575 21:01:08 -- json_config/json_config.sh@98 -- # local app=target 00:05:30.575 21:01:08 -- json_config/json_config.sh@99 -- # shift 00:05:30.575 21:01:08 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:30.575 21:01:08 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:30.575 21:01:08 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:30.575 21:01:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:30.575 21:01:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:30.575 21:01:08 -- json_config/json_config.sh@111 -- # app_pid[$app]=2160366 00:05:30.575 21:01:08 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:30.575 Waiting for target to run... 00:05:30.575 21:01:08 -- json_config/json_config.sh@114 -- # waitforlisten 2160366 /var/tmp/spdk_tgt.sock 00:05:30.575 21:01:08 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.575 21:01:08 -- common/autotest_common.sh@819 -- # '[' -z 2160366 ']' 00:05:30.575 21:01:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.575 21:01:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.575 21:01:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.575 21:01:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.575 21:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.575 [2024-06-08 21:01:08.599068] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
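The relaunch above boots a second target instance directly from the JSON captured out of the first one. The save/restart round trip, continuing the earlier sketch ($tgt_pid and the repo-relative paths carry over from it):

# Capture the live configuration of the running target
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# Stop the first instance; SIGINT lets spdk_tgt run its normal shutdown path
kill -SIGINT "$tgt_pid"
wait "$tgt_pid"
# Relaunch straight from the saved JSON; --wait-for-rpc is not needed this time
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
tgt_pid=$!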
00:05:30.575 [2024-06-08 21:01:08.599196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160366 ] 00:05:30.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.837 [2024-06-08 21:01:08.912356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.106 [2024-06-08 21:01:08.968068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.106 [2024-06-08 21:01:08.968211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.370 [2024-06-08 21:01:09.460230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.630 [2024-06-08 21:01:09.492607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.202 21:01:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.202 21:01:09 -- common/autotest_common.sh@852 -- # return 0 00:05:32.202 21:01:10 -- json_config/json_config.sh@115 -- # echo '' 00:05:32.202 00:05:32.202 21:01:10 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:32.202 21:01:10 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.202 INFO: Checking if target configuration is the same... 00:05:32.202 21:01:10 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.202 21:01:10 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:32.202 21:01:10 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.202 + '[' 2 -ne 2 ']' 00:05:32.202 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.202 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.202 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.202 +++ basename /dev/fd/62 00:05:32.202 ++ mktemp /tmp/62.XXX 00:05:32.202 + tmp_file_1=/tmp/62.dGx 00:05:32.202 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.202 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.202 + tmp_file_2=/tmp/spdk_tgt_config.json.AnZ 00:05:32.202 + ret=0 00:05:32.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.463 + diff -u /tmp/62.dGx /tmp/spdk_tgt_config.json.AnZ 00:05:32.463 + echo 'INFO: JSON config files are the same' 00:05:32.463 INFO: JSON config files are the same 00:05:32.463 + rm /tmp/62.dGx /tmp/spdk_tgt_config.json.AnZ 00:05:32.463 + exit 0 00:05:32.463 21:01:10 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:32.463 21:01:10 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:32.463 INFO: changing configuration and checking if this can be detected... 
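"Checking if target configuration is the same" boils down to a normalized diff: json_diff.sh passes both the freshly saved configuration and the file the target was booted from through config_filter.py -method sort so that ordering differences are ignored, then compares the results. Roughly as below; the temp file names are illustrative, and config_filter.py is assumed to filter stdin to stdout, which is how json_diff.sh drives it:

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_config.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file_config.json
diff -u /tmp/live_config.json /tmp/file_config.json && echo 'INFO: JSON config files are the same'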
00:05:32.463 21:01:10 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.463 21:01:10 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.463 21:01:10 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:32.463 21:01:10 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.463 21:01:10 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.463 + '[' 2 -ne 2 ']' 00:05:32.463 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.463 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.463 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.463 +++ basename /dev/fd/62 00:05:32.463 ++ mktemp /tmp/62.XXX 00:05:32.463 + tmp_file_1=/tmp/62.n2e 00:05:32.463 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.463 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.463 + tmp_file_2=/tmp/spdk_tgt_config.json.Kbh 00:05:32.463 + ret=0 00:05:32.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.723 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.984 + diff -u /tmp/62.n2e /tmp/spdk_tgt_config.json.Kbh 00:05:32.984 + ret=1 00:05:32.984 + echo '=== Start of file: /tmp/62.n2e ===' 00:05:32.984 + cat /tmp/62.n2e 00:05:32.984 + echo '=== End of file: /tmp/62.n2e ===' 00:05:32.984 + echo '' 00:05:32.984 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Kbh ===' 00:05:32.984 + cat /tmp/spdk_tgt_config.json.Kbh 00:05:32.984 + echo '=== End of file: /tmp/spdk_tgt_config.json.Kbh ===' 00:05:32.984 + echo '' 00:05:32.984 + rm /tmp/62.n2e /tmp/spdk_tgt_config.json.Kbh 00:05:32.984 + exit 1 00:05:32.984 21:01:10 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:32.984 INFO: configuration change detected. 
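The change-detection half is the same comparison run after the live target has been mutated; here the test deletes the MallocBdevForConfigChangeCheck bdev it created earlier, so the normalized diff is expected to be non-empty. A sketch, reusing /tmp/file_config.json from the previous snippet:

# Mutate the running target so it no longer matches the JSON it booted from
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# Re-run the normalized diff; a non-zero diff exit status is the "change detected" signal
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_config.json
if ! diff -u /tmp/live_config.json /tmp/file_config.json > /dev/null; then
        echo 'INFO: configuration change detected.'
fi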
00:05:32.984 21:01:10 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:32.984 21:01:10 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:32.984 21:01:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.984 21:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:32.984 21:01:10 -- json_config/json_config.sh@360 -- # local ret=0 00:05:32.984 21:01:10 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:32.984 21:01:10 -- json_config/json_config.sh@370 -- # [[ -n 2160366 ]] 00:05:32.984 21:01:10 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:32.984 21:01:10 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:32.984 21:01:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.984 21:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:32.984 21:01:10 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:32.984 21:01:10 -- json_config/json_config.sh@246 -- # uname -s 00:05:32.984 21:01:10 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:32.984 21:01:10 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:32.984 21:01:10 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:32.984 21:01:10 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:32.984 21:01:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.984 21:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:32.984 21:01:10 -- json_config/json_config.sh@376 -- # killprocess 2160366 00:05:32.984 21:01:10 -- common/autotest_common.sh@926 -- # '[' -z 2160366 ']' 00:05:32.984 21:01:10 -- common/autotest_common.sh@930 -- # kill -0 2160366 00:05:32.984 21:01:10 -- common/autotest_common.sh@931 -- # uname 00:05:32.984 21:01:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:32.984 21:01:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2160366 00:05:32.984 21:01:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:32.984 21:01:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:32.984 21:01:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2160366' 00:05:32.984 killing process with pid 2160366 00:05:32.984 21:01:10 -- common/autotest_common.sh@945 -- # kill 2160366 00:05:32.984 21:01:10 -- common/autotest_common.sh@950 -- # wait 2160366 00:05:33.245 21:01:11 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.245 21:01:11 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:33.245 21:01:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.245 21:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:33.245 21:01:11 -- json_config/json_config.sh@381 -- # return 0 00:05:33.245 21:01:11 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:33.245 INFO: Success 00:05:33.245 00:05:33.245 real 0m7.331s 00:05:33.245 user 0m8.791s 00:05:33.245 sys 0m1.785s 00:05:33.245 21:01:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.245 21:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:33.245 ************************************ 00:05:33.245 END TEST json_config 00:05:33.245 ************************************ 00:05:33.245 21:01:11 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.245 21:01:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.245 21:01:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.245 21:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:33.245 ************************************ 00:05:33.245 START TEST json_config_extra_key 00:05:33.245 ************************************ 00:05:33.245 21:01:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.507 21:01:11 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.507 21:01:11 -- nvmf/common.sh@7 -- # uname -s 00:05:33.507 21:01:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.507 21:01:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.507 21:01:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.507 21:01:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.507 21:01:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.507 21:01:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.507 21:01:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.507 21:01:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.507 21:01:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.507 21:01:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.507 21:01:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.507 21:01:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.507 21:01:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.507 21:01:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.507 21:01:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.507 21:01:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.507 21:01:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.507 21:01:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.507 21:01:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.507 21:01:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.507 21:01:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.507 21:01:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.507 21:01:11 -- paths/export.sh@5 -- # export PATH 00:05:33.507 21:01:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.507 21:01:11 -- nvmf/common.sh@46 -- # : 0 00:05:33.507 21:01:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:33.507 21:01:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:33.507 21:01:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:33.507 21:01:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.507 21:01:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.507 21:01:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:33.507 21:01:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:33.507 21:01:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:33.508 INFO: launching applications... 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2161005 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:33.508 Waiting for target to run... 
00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2161005 /var/tmp/spdk_tgt.sock 00:05:33.508 21:01:11 -- common/autotest_common.sh@819 -- # '[' -z 2161005 ']' 00:05:33.508 21:01:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.508 21:01:11 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:33.508 21:01:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.508 21:01:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.508 21:01:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.508 21:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:33.508 [2024-06-08 21:01:11.466975] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:33.508 [2024-06-08 21:01:11.467035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161005 ] 00:05:33.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.767 [2024-06-08 21:01:11.705455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.767 [2024-06-08 21:01:11.753409] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.767 [2024-06-08 21:01:11.753536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.336 21:01:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.336 21:01:12 -- common/autotest_common.sh@852 -- # return 0 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:34.336 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:34.336 INFO: shutting down applications... 
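The waitforlisten step above amounts to polling the target's RPC socket until it answers. A minimal shell sketch of that idea, assuming the same socket path and scripts/rpc.py location used in this run (an illustration of the pattern, not the exact helper):

    # Poll the spdk_tgt RPC socket until it responds.
    sock=/var/tmp/spdk_tgt.sock
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
            echo 'target is listening'
            break
        fi
        sleep 0.5
    done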
00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2161005 ]] 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2161005 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2161005 00:05:34.336 21:01:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2161005 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:34.908 SPDK target shutdown done 00:05:34.908 21:01:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:34.908 Success 00:05:34.908 00:05:34.908 real 0m1.404s 00:05:34.908 user 0m1.076s 00:05:34.908 sys 0m0.320s 00:05:34.908 21:01:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.908 21:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:34.908 ************************************ 00:05:34.908 END TEST json_config_extra_key 00:05:34.908 ************************************ 00:05:34.908 21:01:12 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.908 21:01:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.908 21:01:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.908 21:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:34.908 ************************************ 00:05:34.908 START TEST alias_rpc 00:05:34.908 ************************************ 00:05:34.908 21:01:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.908 * Looking for test storage... 00:05:34.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:34.908 21:01:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.908 21:01:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2161273 00:05:34.908 21:01:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2161273 00:05:34.908 21:01:12 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.908 21:01:12 -- common/autotest_common.sh@819 -- # '[' -z 2161273 ']' 00:05:34.908 21:01:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.908 21:01:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.908 21:01:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:34.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.908 21:01:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.908 21:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:34.908 [2024-06-08 21:01:12.918587] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:34.908 [2024-06-08 21:01:12.918669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161273 ] 00:05:34.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.908 [2024-06-08 21:01:12.982406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.169 [2024-06-08 21:01:13.055319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.169 [2024-06-08 21:01:13.055468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.750 21:01:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.750 21:01:13 -- common/autotest_common.sh@852 -- # return 0 00:05:35.750 21:01:13 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:35.750 21:01:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2161273 00:05:35.750 21:01:13 -- common/autotest_common.sh@926 -- # '[' -z 2161273 ']' 00:05:35.750 21:01:13 -- common/autotest_common.sh@930 -- # kill -0 2161273 00:05:36.010 21:01:13 -- common/autotest_common.sh@931 -- # uname 00:05:36.010 21:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.010 21:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2161273 00:05:36.010 21:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.010 21:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.010 21:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2161273' 00:05:36.010 killing process with pid 2161273 00:05:36.010 21:01:13 -- common/autotest_common.sh@945 -- # kill 2161273 00:05:36.010 21:01:13 -- common/autotest_common.sh@950 -- # wait 2161273 00:05:36.010 00:05:36.010 real 0m1.331s 00:05:36.010 user 0m1.448s 00:05:36.010 sys 0m0.353s 00:05:36.010 21:01:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.010 21:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:36.010 ************************************ 00:05:36.010 END TEST alias_rpc 00:05:36.010 ************************************ 00:05:36.271 21:01:14 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:36.271 21:01:14 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:36.271 21:01:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.271 21:01:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.271 21:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:36.271 ************************************ 00:05:36.271 START TEST spdkcli_tcp 00:05:36.271 ************************************ 00:05:36.271 21:01:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:36.271 * Looking for test storage... 
00:05:36.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:36.271 21:01:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:36.271 21:01:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:36.271 21:01:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.271 21:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2161605 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 2161605 00:05:36.271 21:01:14 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:36.271 21:01:14 -- common/autotest_common.sh@819 -- # '[' -z 2161605 ']' 00:05:36.271 21:01:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.271 21:01:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.271 21:01:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.271 21:01:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.271 21:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:36.271 [2024-06-08 21:01:14.293019] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:36.271 [2024-06-08 21:01:14.293081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161605 ] 00:05:36.271 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.271 [2024-06-08 21:01:14.355691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.531 [2024-06-08 21:01:14.422691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.531 [2024-06-08 21:01:14.422936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.531 [2024-06-08 21:01:14.422937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.101 21:01:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.101 21:01:15 -- common/autotest_common.sh@852 -- # return 0 00:05:37.101 21:01:15 -- spdkcli/tcp.sh@31 -- # socat_pid=2161933 00:05:37.101 21:01:15 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:37.101 21:01:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:37.101 [ 00:05:37.101 "bdev_malloc_delete", 00:05:37.101 "bdev_malloc_create", 00:05:37.101 "bdev_null_resize", 00:05:37.101 "bdev_null_delete", 00:05:37.101 "bdev_null_create", 00:05:37.101 "bdev_nvme_cuse_unregister", 00:05:37.101 "bdev_nvme_cuse_register", 00:05:37.101 "bdev_opal_new_user", 00:05:37.101 "bdev_opal_set_lock_state", 00:05:37.101 "bdev_opal_delete", 00:05:37.101 "bdev_opal_get_info", 00:05:37.101 "bdev_opal_create", 00:05:37.101 "bdev_nvme_opal_revert", 00:05:37.101 "bdev_nvme_opal_init", 00:05:37.101 "bdev_nvme_send_cmd", 00:05:37.101 "bdev_nvme_get_path_iostat", 00:05:37.101 "bdev_nvme_get_mdns_discovery_info", 00:05:37.101 "bdev_nvme_stop_mdns_discovery", 00:05:37.101 "bdev_nvme_start_mdns_discovery", 00:05:37.101 "bdev_nvme_set_multipath_policy", 00:05:37.101 "bdev_nvme_set_preferred_path", 00:05:37.101 "bdev_nvme_get_io_paths", 00:05:37.101 "bdev_nvme_remove_error_injection", 00:05:37.101 "bdev_nvme_add_error_injection", 00:05:37.101 "bdev_nvme_get_discovery_info", 00:05:37.101 "bdev_nvme_stop_discovery", 00:05:37.101 "bdev_nvme_start_discovery", 00:05:37.101 "bdev_nvme_get_controller_health_info", 00:05:37.101 "bdev_nvme_disable_controller", 00:05:37.101 "bdev_nvme_enable_controller", 00:05:37.101 "bdev_nvme_reset_controller", 00:05:37.101 "bdev_nvme_get_transport_statistics", 00:05:37.101 "bdev_nvme_apply_firmware", 00:05:37.101 "bdev_nvme_detach_controller", 00:05:37.101 "bdev_nvme_get_controllers", 00:05:37.101 "bdev_nvme_attach_controller", 00:05:37.101 "bdev_nvme_set_hotplug", 00:05:37.101 "bdev_nvme_set_options", 00:05:37.102 "bdev_passthru_delete", 00:05:37.102 "bdev_passthru_create", 00:05:37.102 "bdev_lvol_grow_lvstore", 00:05:37.102 "bdev_lvol_get_lvols", 00:05:37.102 "bdev_lvol_get_lvstores", 00:05:37.102 "bdev_lvol_delete", 00:05:37.102 "bdev_lvol_set_read_only", 00:05:37.102 "bdev_lvol_resize", 00:05:37.102 "bdev_lvol_decouple_parent", 00:05:37.102 "bdev_lvol_inflate", 00:05:37.102 "bdev_lvol_rename", 00:05:37.102 "bdev_lvol_clone_bdev", 00:05:37.102 "bdev_lvol_clone", 00:05:37.102 "bdev_lvol_snapshot", 00:05:37.102 "bdev_lvol_create", 00:05:37.102 "bdev_lvol_delete_lvstore", 00:05:37.102 "bdev_lvol_rename_lvstore", 00:05:37.102 "bdev_lvol_create_lvstore", 00:05:37.102 "bdev_raid_set_options", 00:05:37.102 
"bdev_raid_remove_base_bdev", 00:05:37.102 "bdev_raid_add_base_bdev", 00:05:37.102 "bdev_raid_delete", 00:05:37.102 "bdev_raid_create", 00:05:37.102 "bdev_raid_get_bdevs", 00:05:37.102 "bdev_error_inject_error", 00:05:37.102 "bdev_error_delete", 00:05:37.102 "bdev_error_create", 00:05:37.102 "bdev_split_delete", 00:05:37.102 "bdev_split_create", 00:05:37.102 "bdev_delay_delete", 00:05:37.102 "bdev_delay_create", 00:05:37.102 "bdev_delay_update_latency", 00:05:37.102 "bdev_zone_block_delete", 00:05:37.102 "bdev_zone_block_create", 00:05:37.102 "blobfs_create", 00:05:37.102 "blobfs_detect", 00:05:37.102 "blobfs_set_cache_size", 00:05:37.102 "bdev_aio_delete", 00:05:37.102 "bdev_aio_rescan", 00:05:37.102 "bdev_aio_create", 00:05:37.102 "bdev_ftl_set_property", 00:05:37.102 "bdev_ftl_get_properties", 00:05:37.102 "bdev_ftl_get_stats", 00:05:37.102 "bdev_ftl_unmap", 00:05:37.102 "bdev_ftl_unload", 00:05:37.102 "bdev_ftl_delete", 00:05:37.102 "bdev_ftl_load", 00:05:37.102 "bdev_ftl_create", 00:05:37.102 "bdev_virtio_attach_controller", 00:05:37.102 "bdev_virtio_scsi_get_devices", 00:05:37.102 "bdev_virtio_detach_controller", 00:05:37.102 "bdev_virtio_blk_set_hotplug", 00:05:37.102 "bdev_iscsi_delete", 00:05:37.102 "bdev_iscsi_create", 00:05:37.102 "bdev_iscsi_set_options", 00:05:37.102 "accel_error_inject_error", 00:05:37.102 "ioat_scan_accel_module", 00:05:37.102 "dsa_scan_accel_module", 00:05:37.102 "iaa_scan_accel_module", 00:05:37.102 "iscsi_set_options", 00:05:37.102 "iscsi_get_auth_groups", 00:05:37.102 "iscsi_auth_group_remove_secret", 00:05:37.102 "iscsi_auth_group_add_secret", 00:05:37.102 "iscsi_delete_auth_group", 00:05:37.102 "iscsi_create_auth_group", 00:05:37.102 "iscsi_set_discovery_auth", 00:05:37.102 "iscsi_get_options", 00:05:37.102 "iscsi_target_node_request_logout", 00:05:37.102 "iscsi_target_node_set_redirect", 00:05:37.102 "iscsi_target_node_set_auth", 00:05:37.102 "iscsi_target_node_add_lun", 00:05:37.102 "iscsi_get_connections", 00:05:37.102 "iscsi_portal_group_set_auth", 00:05:37.102 "iscsi_start_portal_group", 00:05:37.102 "iscsi_delete_portal_group", 00:05:37.102 "iscsi_create_portal_group", 00:05:37.102 "iscsi_get_portal_groups", 00:05:37.102 "iscsi_delete_target_node", 00:05:37.102 "iscsi_target_node_remove_pg_ig_maps", 00:05:37.102 "iscsi_target_node_add_pg_ig_maps", 00:05:37.102 "iscsi_create_target_node", 00:05:37.102 "iscsi_get_target_nodes", 00:05:37.102 "iscsi_delete_initiator_group", 00:05:37.102 "iscsi_initiator_group_remove_initiators", 00:05:37.102 "iscsi_initiator_group_add_initiators", 00:05:37.102 "iscsi_create_initiator_group", 00:05:37.102 "iscsi_get_initiator_groups", 00:05:37.102 "nvmf_set_crdt", 00:05:37.102 "nvmf_set_config", 00:05:37.102 "nvmf_set_max_subsystems", 00:05:37.102 "nvmf_subsystem_get_listeners", 00:05:37.102 "nvmf_subsystem_get_qpairs", 00:05:37.102 "nvmf_subsystem_get_controllers", 00:05:37.102 "nvmf_get_stats", 00:05:37.102 "nvmf_get_transports", 00:05:37.102 "nvmf_create_transport", 00:05:37.102 "nvmf_get_targets", 00:05:37.102 "nvmf_delete_target", 00:05:37.102 "nvmf_create_target", 00:05:37.102 "nvmf_subsystem_allow_any_host", 00:05:37.102 "nvmf_subsystem_remove_host", 00:05:37.102 "nvmf_subsystem_add_host", 00:05:37.102 "nvmf_subsystem_remove_ns", 00:05:37.102 "nvmf_subsystem_add_ns", 00:05:37.102 "nvmf_subsystem_listener_set_ana_state", 00:05:37.102 "nvmf_discovery_get_referrals", 00:05:37.102 "nvmf_discovery_remove_referral", 00:05:37.102 "nvmf_discovery_add_referral", 00:05:37.102 "nvmf_subsystem_remove_listener", 
00:05:37.102 "nvmf_subsystem_add_listener", 00:05:37.102 "nvmf_delete_subsystem", 00:05:37.102 "nvmf_create_subsystem", 00:05:37.102 "nvmf_get_subsystems", 00:05:37.102 "env_dpdk_get_mem_stats", 00:05:37.102 "nbd_get_disks", 00:05:37.102 "nbd_stop_disk", 00:05:37.102 "nbd_start_disk", 00:05:37.102 "ublk_recover_disk", 00:05:37.102 "ublk_get_disks", 00:05:37.102 "ublk_stop_disk", 00:05:37.102 "ublk_start_disk", 00:05:37.102 "ublk_destroy_target", 00:05:37.102 "ublk_create_target", 00:05:37.102 "virtio_blk_create_transport", 00:05:37.102 "virtio_blk_get_transports", 00:05:37.102 "vhost_controller_set_coalescing", 00:05:37.102 "vhost_get_controllers", 00:05:37.102 "vhost_delete_controller", 00:05:37.102 "vhost_create_blk_controller", 00:05:37.102 "vhost_scsi_controller_remove_target", 00:05:37.102 "vhost_scsi_controller_add_target", 00:05:37.102 "vhost_start_scsi_controller", 00:05:37.102 "vhost_create_scsi_controller", 00:05:37.102 "thread_set_cpumask", 00:05:37.102 "framework_get_scheduler", 00:05:37.102 "framework_set_scheduler", 00:05:37.102 "framework_get_reactors", 00:05:37.102 "thread_get_io_channels", 00:05:37.102 "thread_get_pollers", 00:05:37.102 "thread_get_stats", 00:05:37.102 "framework_monitor_context_switch", 00:05:37.102 "spdk_kill_instance", 00:05:37.102 "log_enable_timestamps", 00:05:37.102 "log_get_flags", 00:05:37.102 "log_clear_flag", 00:05:37.102 "log_set_flag", 00:05:37.102 "log_get_level", 00:05:37.102 "log_set_level", 00:05:37.102 "log_get_print_level", 00:05:37.102 "log_set_print_level", 00:05:37.102 "framework_enable_cpumask_locks", 00:05:37.102 "framework_disable_cpumask_locks", 00:05:37.102 "framework_wait_init", 00:05:37.102 "framework_start_init", 00:05:37.102 "scsi_get_devices", 00:05:37.102 "bdev_get_histogram", 00:05:37.102 "bdev_enable_histogram", 00:05:37.102 "bdev_set_qos_limit", 00:05:37.102 "bdev_set_qd_sampling_period", 00:05:37.102 "bdev_get_bdevs", 00:05:37.102 "bdev_reset_iostat", 00:05:37.102 "bdev_get_iostat", 00:05:37.102 "bdev_examine", 00:05:37.102 "bdev_wait_for_examine", 00:05:37.102 "bdev_set_options", 00:05:37.102 "notify_get_notifications", 00:05:37.102 "notify_get_types", 00:05:37.102 "accel_get_stats", 00:05:37.102 "accel_set_options", 00:05:37.102 "accel_set_driver", 00:05:37.102 "accel_crypto_key_destroy", 00:05:37.102 "accel_crypto_keys_get", 00:05:37.102 "accel_crypto_key_create", 00:05:37.102 "accel_assign_opc", 00:05:37.102 "accel_get_module_info", 00:05:37.102 "accel_get_opc_assignments", 00:05:37.102 "vmd_rescan", 00:05:37.102 "vmd_remove_device", 00:05:37.102 "vmd_enable", 00:05:37.102 "sock_set_default_impl", 00:05:37.102 "sock_impl_set_options", 00:05:37.102 "sock_impl_get_options", 00:05:37.102 "iobuf_get_stats", 00:05:37.102 "iobuf_set_options", 00:05:37.102 "framework_get_pci_devices", 00:05:37.102 "framework_get_config", 00:05:37.102 "framework_get_subsystems", 00:05:37.102 "trace_get_info", 00:05:37.102 "trace_get_tpoint_group_mask", 00:05:37.102 "trace_disable_tpoint_group", 00:05:37.102 "trace_enable_tpoint_group", 00:05:37.102 "trace_clear_tpoint_mask", 00:05:37.102 "trace_set_tpoint_mask", 00:05:37.102 "spdk_get_version", 00:05:37.102 "rpc_get_methods" 00:05:37.102 ] 00:05:37.363 21:01:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:37.363 21:01:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:37.363 21:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:37.363 21:01:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:37.363 21:01:15 -- spdkcli/tcp.sh@38 -- # killprocess 
2161605 00:05:37.363 21:01:15 -- common/autotest_common.sh@926 -- # '[' -z 2161605 ']' 00:05:37.363 21:01:15 -- common/autotest_common.sh@930 -- # kill -0 2161605 00:05:37.363 21:01:15 -- common/autotest_common.sh@931 -- # uname 00:05:37.363 21:01:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.363 21:01:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2161605 00:05:37.363 21:01:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:37.363 21:01:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:37.363 21:01:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2161605' 00:05:37.363 killing process with pid 2161605 00:05:37.363 21:01:15 -- common/autotest_common.sh@945 -- # kill 2161605 00:05:37.363 21:01:15 -- common/autotest_common.sh@950 -- # wait 2161605 00:05:37.624 00:05:37.624 real 0m1.351s 00:05:37.624 user 0m2.497s 00:05:37.624 sys 0m0.393s 00:05:37.624 21:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.624 21:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:37.624 ************************************ 00:05:37.624 END TEST spdkcli_tcp 00:05:37.624 ************************************ 00:05:37.624 21:01:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.624 21:01:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.624 21:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.624 21:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:37.624 ************************************ 00:05:37.624 START TEST dpdk_mem_utility 00:05:37.624 ************************************ 00:05:37.624 21:01:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.624 * Looking for test storage... 00:05:37.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:37.624 21:01:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.624 21:01:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2162006 00:05:37.624 21:01:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2162006 00:05:37.624 21:01:15 -- common/autotest_common.sh@819 -- # '[' -z 2162006 ']' 00:05:37.624 21:01:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.624 21:01:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.624 21:01:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.624 21:01:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.624 21:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:37.624 21:01:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.624 [2024-06-08 21:01:15.683870] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:37.624 [2024-06-08 21:01:15.683928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162006 ] 00:05:37.624 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.885 [2024-06-08 21:01:15.742671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.885 [2024-06-08 21:01:15.805378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.885 [2024-06-08 21:01:15.805510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.454 21:01:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.454 21:01:16 -- common/autotest_common.sh@852 -- # return 0 00:05:38.454 21:01:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.454 21:01:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.454 21:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.454 21:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.454 { 00:05:38.454 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.454 } 00:05:38.454 21:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.454 21:01:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.454 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:38.454 1 heaps totaling size 814.000000 MiB 00:05:38.454 size: 814.000000 MiB heap id: 0 00:05:38.454 end heaps---------- 00:05:38.454 8 mempools totaling size 598.116089 MiB 00:05:38.454 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.454 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.454 size: 84.521057 MiB name: bdev_io_2162006 00:05:38.454 size: 51.011292 MiB name: evtpool_2162006 00:05:38.454 size: 50.003479 MiB name: msgpool_2162006 00:05:38.454 size: 21.763794 MiB name: PDU_Pool 00:05:38.454 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:38.454 size: 0.026123 MiB name: Session_Pool 00:05:38.454 end mempools------- 00:05:38.454 6 memzones totaling size 4.142822 MiB 00:05:38.454 size: 1.000366 MiB name: RG_ring_0_2162006 00:05:38.454 size: 1.000366 MiB name: RG_ring_1_2162006 00:05:38.454 size: 1.000366 MiB name: RG_ring_4_2162006 00:05:38.454 size: 1.000366 MiB name: RG_ring_5_2162006 00:05:38.454 size: 0.125366 MiB name: RG_ring_2_2162006 00:05:38.454 size: 0.015991 MiB name: RG_ring_3_2162006 00:05:38.454 end memzones------- 00:05:38.454 21:01:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.454 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:38.454 list of free elements. 
size: 12.519348 MiB 00:05:38.454 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:38.454 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:38.454 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:38.454 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:38.454 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:38.454 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:38.454 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:38.454 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:38.454 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:38.454 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:38.454 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:38.454 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:38.454 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:38.454 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:38.454 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:38.454 list of standard malloc elements. size: 199.218079 MiB 00:05:38.454 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:38.454 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:38.454 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:38.454 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:38.454 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:38.454 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:38.454 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:38.454 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:38.454 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:38.454 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:38.454 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:38.454 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:38.454 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:38.454 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:38.455 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:38.455 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:38.455 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:38.455 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:38.455 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:38.455 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:38.455 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:38.455 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:38.455 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:38.455 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:38.455 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:38.455 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:38.455 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:38.455 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:38.455 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:38.455 list of memzone associated elements. size: 602.262573 MiB 00:05:38.455 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:38.455 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:38.455 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:38.455 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:38.455 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:38.455 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2162006_0 00:05:38.455 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:38.455 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2162006_0 00:05:38.455 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:38.455 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2162006_0 00:05:38.455 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:38.455 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:38.455 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:38.455 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:38.455 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:38.455 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2162006 00:05:38.455 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:38.455 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2162006 00:05:38.455 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:38.455 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2162006 00:05:38.455 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:38.455 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:38.455 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:38.455 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:38.455 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:38.455 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:38.455 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:38.455 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:38.455 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:38.455 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2162006 00:05:38.455 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:38.455 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2162006 00:05:38.455 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:38.455 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2162006 00:05:38.455 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:38.455 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2162006 00:05:38.455 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:38.455 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2162006 00:05:38.455 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:38.455 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:38.455 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:38.455 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:38.455 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:38.455 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:38.455 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:38.455 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2162006 00:05:38.455 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:38.455 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:38.455 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:38.455 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:38.455 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:38.455 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2162006 00:05:38.455 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:38.455 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:38.455 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:38.455 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2162006 00:05:38.455 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:38.455 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2162006 00:05:38.455 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:38.455 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:38.455 21:01:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:38.455 21:01:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2162006 00:05:38.455 21:01:16 -- common/autotest_common.sh@926 -- # '[' -z 2162006 ']' 00:05:38.455 21:01:16 -- common/autotest_common.sh@930 -- # kill -0 2162006 00:05:38.455 21:01:16 -- common/autotest_common.sh@931 -- # uname 00:05:38.455 21:01:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.455 21:01:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2162006 00:05:38.715 21:01:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.715 21:01:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.715 21:01:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2162006' 00:05:38.715 killing process with pid 2162006 00:05:38.715 21:01:16 -- common/autotest_common.sh@945 -- # kill 2162006 00:05:38.715 21:01:16 -- common/autotest_common.sh@950 -- # wait 2162006 00:05:38.715 00:05:38.715 real 0m1.233s 00:05:38.715 user 0m1.306s 00:05:38.715 sys 0m0.329s 00:05:38.715 21:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.715 21:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.715 ************************************ 00:05:38.715 END TEST dpdk_mem_utility 00:05:38.715 ************************************ 00:05:38.975 21:01:16 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:38.975 21:01:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.975 21:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.975 21:01:16 -- common/autotest_common.sh@10 -- # set +x 
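The dpdk_mem_utility output above is produced in two steps that can be repeated against a running target: the env_dpdk_get_mem_stats RPC (which, per the JSON above, writes /tmp/spdk_mem_dump.txt) and scripts/dpdk_mem_info.py, which formats that dump. A rough sketch from the SPDK tree, assuming the target is still listening on the default /var/tmp/spdk.sock:

    # Ask the target to dump its DPDK memory state, then summarize it.
    ./scripts/rpc.py env_dpdk_get_mem_stats   # reports { "filename": "/tmp/spdk_mem_dump.txt" }
    ./scripts/dpdk_mem_info.py                # heap / mempool / memzone totals, as shown above
    ./scripts/dpdk_mem_info.py -m 0           # detailed element lists for heap id 0, as shown above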
00:05:38.975 ************************************ 00:05:38.975 START TEST event 00:05:38.975 ************************************ 00:05:38.975 21:01:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:38.975 * Looking for test storage... 00:05:38.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:38.975 21:01:16 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:38.975 21:01:16 -- bdev/nbd_common.sh@6 -- # set -e 00:05:38.975 21:01:16 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.975 21:01:16 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:38.975 21:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.975 21:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.975 ************************************ 00:05:38.975 START TEST event_perf 00:05:38.975 ************************************ 00:05:38.975 21:01:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.975 Running I/O for 1 seconds...[2024-06-08 21:01:16.943625] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:38.975 [2024-06-08 21:01:16.943734] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162391 ] 00:05:38.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.975 [2024-06-08 21:01:17.010378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.235 [2024-06-08 21:01:17.082608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.235 [2024-06-08 21:01:17.082721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.235 [2024-06-08 21:01:17.082879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.235 [2024-06-08 21:01:17.082880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.178 Running I/O for 1 seconds... 00:05:40.178 lcore 0: 167086 00:05:40.178 lcore 1: 167082 00:05:40.178 lcore 2: 167083 00:05:40.178 lcore 3: 167086 00:05:40.178 done. 
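For scale, the event_perf run above (-m 0xF -t 1) reports 167086, 167082, 167083 and 167086 events on lcores 0-3 over its one-second window, i.e. 668,337 events in total, or roughly 167k events per reactor per second in this configuration.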
00:05:40.178 00:05:40.178 real 0m1.213s 00:05:40.178 user 0m4.134s 00:05:40.178 sys 0m0.076s 00:05:40.178 21:01:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.178 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.178 ************************************ 00:05:40.178 END TEST event_perf 00:05:40.178 ************************************ 00:05:40.178 21:01:18 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.178 21:01:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:40.178 21:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.178 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.178 ************************************ 00:05:40.178 START TEST event_reactor 00:05:40.178 ************************************ 00:05:40.178 21:01:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.178 [2024-06-08 21:01:18.200817] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:40.178 [2024-06-08 21:01:18.200924] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162631 ] 00:05:40.178 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.178 [2024-06-08 21:01:18.264674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.440 [2024-06-08 21:01:18.328979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.382 test_start 00:05:41.382 oneshot 00:05:41.382 tick 100 00:05:41.382 tick 100 00:05:41.382 tick 250 00:05:41.382 tick 100 00:05:41.382 tick 100 00:05:41.382 tick 100 00:05:41.382 tick 250 00:05:41.382 tick 500 00:05:41.382 tick 100 00:05:41.382 tick 100 00:05:41.382 tick 250 00:05:41.382 tick 100 00:05:41.382 tick 100 00:05:41.382 test_end 00:05:41.382 00:05:41.382 real 0m1.200s 00:05:41.382 user 0m1.130s 00:05:41.382 sys 0m0.066s 00:05:41.382 21:01:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.382 21:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:41.382 ************************************ 00:05:41.382 END TEST event_reactor 00:05:41.382 ************************************ 00:05:41.382 21:01:19 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.382 21:01:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:41.382 21:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.382 21:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:41.382 ************************************ 00:05:41.382 START TEST event_reactor_perf 00:05:41.382 ************************************ 00:05:41.382 21:01:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.382 [2024-06-08 21:01:19.445071] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:41.382 [2024-06-08 21:01:19.445168] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162794 ] 00:05:41.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.649 [2024-06-08 21:01:19.509050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.649 [2024-06-08 21:01:19.570741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.612 test_start 00:05:42.612 test_end 00:05:42.612 Performance: 366387 events per second 00:05:42.612 00:05:42.612 real 0m1.200s 00:05:42.612 user 0m1.128s 00:05:42.612 sys 0m0.068s 00:05:42.612 21:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.612 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:42.612 ************************************ 00:05:42.612 END TEST event_reactor_perf 00:05:42.612 ************************************ 00:05:42.612 21:01:20 -- event/event.sh@49 -- # uname -s 00:05:42.612 21:01:20 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.612 21:01:20 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.612 21:01:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.612 21:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.612 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:42.612 ************************************ 00:05:42.612 START TEST event_scheduler 00:05:42.612 ************************************ 00:05:42.612 21:01:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.873 * Looking for test storage... 00:05:42.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:42.873 21:01:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.873 21:01:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2163167 00:05:42.873 21:01:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.873 21:01:20 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.873 21:01:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 2163167 00:05:42.873 21:01:20 -- common/autotest_common.sh@819 -- # '[' -z 2163167 ']' 00:05:42.873 21:01:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.873 21:01:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.873 21:01:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.873 21:01:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.873 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:42.873 [2024-06-08 21:01:20.818050] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:42.873 [2024-06-08 21:01:20.818144] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163167 ] 00:05:42.873 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.873 [2024-06-08 21:01:20.875261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.873 [2024-06-08 21:01:20.938041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.873 [2024-06-08 21:01:20.938194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.873 [2024-06-08 21:01:20.938348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.873 [2024-06-08 21:01:20.938349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.815 21:01:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.815 21:01:21 -- common/autotest_common.sh@852 -- # return 0 00:05:43.815 21:01:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.815 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.815 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.815 POWER: Env isn't set yet! 00:05:43.815 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:43.815 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.815 POWER: Attempting to initialise PSTAT power management... 00:05:43.815 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:43.815 POWER: Initialized successfully for lcore 0 power management 00:05:43.815 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:43.815 POWER: Initialized successfully for lcore 1 power management 00:05:43.815 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:43.815 POWER: Initialized successfully for lcore 2 power management 00:05:43.815 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:43.815 POWER: Initialized successfully for lcore 3 power management 00:05:43.815 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.815 21:01:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:43.815 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.815 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.815 [2024-06-08 21:01:21.707194] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
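The scheduler_create_thread subtest that follows drives everything through the RPC interface with the scheduler test plugin loaded. The same calls can be issued by hand; a sketch mirroring the invocations below, assuming the scheduler app is up on the default /var/tmp/spdk.sock and the scheduler_plugin module is importable, as it is in this test environment:

    # Switch to the dynamic scheduler and finish framework init, as the test did above.
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # Create test threads via the plugin: -n name, -m cpumask, -a activity level.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Thread ids are returned by the create calls; 11 and 12 are the ids seen in this run.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12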
00:05:43.815 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.815 21:01:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.815 21:01:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.815 21:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.815 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.815 ************************************ 00:05:43.815 START TEST scheduler_create_thread 00:05:43.815 ************************************ 00:05:43.815 21:01:21 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 2 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 3 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 4 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 5 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 6 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 7 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 8 00:05:43.816 21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.816 9 00:05:43.816 
21:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.816 21:01:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:43.816 21:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.816 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:45.201 10 00:05:45.201 21:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.201 21:01:23 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.201 21:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.201 21:01:23 -- common/autotest_common.sh@10 -- # set +x 00:05:46.584 21:01:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.584 21:01:24 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.584 21:01:24 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.584 21:01:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.584 21:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:47.154 21:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.154 21:01:25 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.154 21:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.154 21:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.095 21:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.095 21:01:25 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.095 21:01:25 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.095 21:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.095 21:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.667 21:01:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.667 00:05:48.667 real 0m4.797s 00:05:48.667 user 0m0.022s 00:05:48.667 sys 0m0.009s 00:05:48.667 21:01:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.667 21:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.667 ************************************ 00:05:48.667 END TEST scheduler_create_thread 00:05:48.667 ************************************ 00:05:48.667 21:01:26 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.667 21:01:26 -- scheduler/scheduler.sh@46 -- # killprocess 2163167 00:05:48.667 21:01:26 -- common/autotest_common.sh@926 -- # '[' -z 2163167 ']' 00:05:48.667 21:01:26 -- common/autotest_common.sh@930 -- # kill -0 2163167 00:05:48.667 21:01:26 -- common/autotest_common.sh@931 -- # uname 00:05:48.667 21:01:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.667 21:01:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2163167 00:05:48.667 21:01:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:48.667 21:01:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:48.667 21:01:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2163167' 00:05:48.667 killing process with pid 2163167 00:05:48.667 21:01:26 -- common/autotest_common.sh@945 -- # kill 2163167 00:05:48.667 21:01:26 -- common/autotest_common.sh@950 -- # wait 2163167 00:05:48.928 [2024-06-08 21:01:26.793043] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
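The rpc_cmd lines in the scheduler_create_thread trace above boil down to plain rpc.py invocations against the test app's RPC socket. A minimal sketch of that sequence follows; the default /var/tmp/spdk.sock socket and the assumption that the scheduler_plugin module is importable by rpc.py are not shown explicitly in this log, and rpc_cmd is treated here as a thin pass-through wrapper.

```bash
# Sketch only -- mirrors the RPC sequence exercised by scheduler_create_thread above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Threads pinned to single cores: -n name, -m core mask, -a activity value
# (100 for the "active" threads in the trace, 0 for the "idle" ones).
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# An unpinned thread that is active roughly a third of the time; the RPC prints
# the new thread id (hence the thread_id=11 assignment in the trace).
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)

# Change its activity to 50, then create and immediately delete another thread.
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
del=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$del"
```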
00:05:48.928 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:48.928 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:48.928 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:48.928 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:48.928 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:48.928 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:48.928 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:48.928 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:48.928 00:05:48.928 real 0m6.282s 00:05:48.928 user 0m14.115s 00:05:48.928 sys 0m0.324s 00:05:48.928 21:01:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.928 21:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.928 ************************************ 00:05:48.928 END TEST event_scheduler 00:05:48.928 ************************************ 00:05:48.928 21:01:26 -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.928 21:01:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.928 21:01:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.928 21:01:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.928 21:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.928 ************************************ 00:05:48.928 START TEST app_repeat 00:05:48.928 ************************************ 00:05:48.928 21:01:27 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:48.928 21:01:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.928 21:01:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.928 21:01:27 -- event/event.sh@13 -- # local nbd_list 00:05:48.928 21:01:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.928 21:01:27 -- event/event.sh@14 -- # local bdev_list 00:05:48.928 21:01:27 -- event/event.sh@15 -- # local repeat_times=4 00:05:48.928 21:01:27 -- event/event.sh@17 -- # modprobe nbd 00:05:48.928 21:01:27 -- event/event.sh@19 -- # repeat_pid=2164568 00:05:48.928 21:01:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.928 21:01:27 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.928 21:01:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2164568' 00:05:48.928 Process app_repeat pid: 2164568 00:05:48.928 21:01:27 -- event/event.sh@23 -- # for i in {0..2} 00:05:48.928 21:01:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.928 spdk_app_start Round 0 00:05:48.928 21:01:27 -- event/event.sh@25 -- # waitforlisten 2164568 /var/tmp/spdk-nbd.sock 00:05:48.928 21:01:27 -- common/autotest_common.sh@819 -- # '[' -z 2164568 ']' 00:05:48.928 21:01:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.928 21:01:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.928 21:01:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
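The app_repeat block that starts here launches the repeat app and then waits for its RPC socket to come up (the "Waiting for process to start up..." lines). The real waitforlisten helper lives in autotest_common.sh; the loop below is only a sketch of the idea, using rpc_get_methods as the readiness probe, with the binary path, socket and arguments taken from the trace.

```bash
# Sketch: start app_repeat on cores 0-1 with a 4 s repeat interval and poll its
# RPC socket until it answers. Not the real waitforlisten implementation.
app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

$app -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!

for _ in $(seq 1 100); do
    if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break            # the app is up and listening on the UNIX socket
    fi
    sleep 0.1
done
```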
00:05:48.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.928 21:01:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.928 21:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:49.189 [2024-06-08 21:01:27.041720] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:49.189 [2024-06-08 21:01:27.041792] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164568 ] 00:05:49.189 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.189 [2024-06-08 21:01:27.104977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.189 [2024-06-08 21:01:27.173102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.189 [2024-06-08 21:01:27.173104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.761 21:01:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.761 21:01:27 -- common/autotest_common.sh@852 -- # return 0 00:05:49.761 21:01:27 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.020 Malloc0 00:05:50.020 21:01:27 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.020 Malloc1 00:05:50.281 21:01:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.281 /dev/nbd0 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.281 21:01:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:50.281 21:01:28 -- common/autotest_common.sh@857 -- # local i 00:05:50.281 21:01:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:50.281 21:01:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:50.281 21:01:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:50.281 21:01:28 -- 
common/autotest_common.sh@861 -- # break 00:05:50.281 21:01:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:50.281 21:01:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:50.281 21:01:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.281 1+0 records in 00:05:50.281 1+0 records out 00:05:50.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274168 s, 14.9 MB/s 00:05:50.281 21:01:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.281 21:01:28 -- common/autotest_common.sh@874 -- # size=4096 00:05:50.281 21:01:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.281 21:01:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:50.281 21:01:28 -- common/autotest_common.sh@877 -- # return 0 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.281 21:01:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.542 /dev/nbd1 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.542 21:01:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:50.542 21:01:28 -- common/autotest_common.sh@857 -- # local i 00:05:50.542 21:01:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:50.542 21:01:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:50.542 21:01:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:50.542 21:01:28 -- common/autotest_common.sh@861 -- # break 00:05:50.542 21:01:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:50.542 21:01:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:50.542 21:01:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.542 1+0 records in 00:05:50.542 1+0 records out 00:05:50.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280477 s, 14.6 MB/s 00:05:50.542 21:01:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.542 21:01:28 -- common/autotest_common.sh@874 -- # size=4096 00:05:50.542 21:01:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.542 21:01:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:50.542 21:01:28 -- common/autotest_common.sh@877 -- # return 0 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.542 21:01:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.803 { 00:05:50.803 "nbd_device": "/dev/nbd0", 00:05:50.803 "bdev_name": "Malloc0" 00:05:50.803 }, 00:05:50.803 { 00:05:50.803 "nbd_device": "/dev/nbd1", 
00:05:50.803 "bdev_name": "Malloc1" 00:05:50.803 } 00:05:50.803 ]' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.803 { 00:05:50.803 "nbd_device": "/dev/nbd0", 00:05:50.803 "bdev_name": "Malloc0" 00:05:50.803 }, 00:05:50.803 { 00:05:50.803 "nbd_device": "/dev/nbd1", 00:05:50.803 "bdev_name": "Malloc1" 00:05:50.803 } 00:05:50.803 ]' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.803 /dev/nbd1' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.803 /dev/nbd1' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.803 256+0 records in 00:05:50.803 256+0 records out 00:05:50.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117906 s, 88.9 MB/s 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.803 256+0 records in 00:05:50.803 256+0 records out 00:05:50.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160824 s, 65.2 MB/s 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.803 256+0 records in 00:05:50.803 256+0 records out 00:05:50.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166815 s, 62.9 MB/s 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.803 21:01:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@51 -- # local i 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.804 21:01:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@41 -- # break 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.065 21:01:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@41 -- # break 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.065 21:01:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@65 -- # true 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.326 21:01:29 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.326 21:01:29 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.586 21:01:29 -- event/event.sh@35 -- # 
sleep 3 00:05:51.586 [2024-06-08 21:01:29.597821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.586 [2024-06-08 21:01:29.660523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.586 [2024-06-08 21:01:29.660682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.846 [2024-06-08 21:01:29.692084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.846 [2024-06-08 21:01:29.692122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.392 21:01:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:54.392 21:01:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.392 spdk_app_start Round 1 00:05:54.392 21:01:32 -- event/event.sh@25 -- # waitforlisten 2164568 /var/tmp/spdk-nbd.sock 00:05:54.392 21:01:32 -- common/autotest_common.sh@819 -- # '[' -z 2164568 ']' 00:05:54.392 21:01:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.392 21:01:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.392 21:01:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.392 21:01:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.393 21:01:32 -- common/autotest_common.sh@10 -- # set +x 00:05:54.653 21:01:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.653 21:01:32 -- common/autotest_common.sh@852 -- # return 0 00:05:54.653 21:01:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.915 Malloc0 00:05:54.915 21:01:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.915 Malloc1 00:05:54.915 21:01:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@12 -- # local i 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.915 21:01:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.175 /dev/nbd0 00:05:55.175 21:01:33 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.175 21:01:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.175 21:01:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:55.175 21:01:33 -- common/autotest_common.sh@857 -- # local i 00:05:55.175 21:01:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:55.175 21:01:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:55.175 21:01:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:55.175 21:01:33 -- common/autotest_common.sh@861 -- # break 00:05:55.175 21:01:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:55.175 21:01:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:55.175 21:01:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.175 1+0 records in 00:05:55.175 1+0 records out 00:05:55.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211205 s, 19.4 MB/s 00:05:55.175 21:01:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.175 21:01:33 -- common/autotest_common.sh@874 -- # size=4096 00:05:55.175 21:01:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.175 21:01:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:55.175 21:01:33 -- common/autotest_common.sh@877 -- # return 0 00:05:55.175 21:01:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.175 21:01:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.175 21:01:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.175 /dev/nbd1 00:05:55.175 21:01:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.444 21:01:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.444 21:01:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:55.445 21:01:33 -- common/autotest_common.sh@857 -- # local i 00:05:55.445 21:01:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:55.445 21:01:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:55.445 21:01:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:55.445 21:01:33 -- common/autotest_common.sh@861 -- # break 00:05:55.445 21:01:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:55.445 21:01:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:55.445 21:01:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.445 1+0 records in 00:05:55.445 1+0 records out 00:05:55.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282976 s, 14.5 MB/s 00:05:55.445 21:01:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.445 21:01:33 -- common/autotest_common.sh@874 -- # size=4096 00:05:55.445 21:01:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.445 21:01:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:55.445 21:01:33 -- common/autotest_common.sh@877 -- # return 0 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.445 { 00:05:55.445 "nbd_device": "/dev/nbd0", 00:05:55.445 "bdev_name": "Malloc0" 00:05:55.445 }, 00:05:55.445 { 00:05:55.445 "nbd_device": "/dev/nbd1", 00:05:55.445 "bdev_name": "Malloc1" 00:05:55.445 } 00:05:55.445 ]' 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.445 { 00:05:55.445 "nbd_device": "/dev/nbd0", 00:05:55.445 "bdev_name": "Malloc0" 00:05:55.445 }, 00:05:55.445 { 00:05:55.445 "nbd_device": "/dev/nbd1", 00:05:55.445 "bdev_name": "Malloc1" 00:05:55.445 } 00:05:55.445 ]' 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.445 /dev/nbd1' 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.445 /dev/nbd1' 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.445 21:01:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.446 21:01:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.446 21:01:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.446 21:01:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.446 21:01:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.446 256+0 records in 00:05:55.446 256+0 records out 00:05:55.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115224 s, 91.0 MB/s 00:05:55.446 21:01:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.446 21:01:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.711 256+0 records in 00:05:55.711 256+0 records out 00:05:55.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181465 s, 57.8 MB/s 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.711 256+0 records in 00:05:55.711 256+0 records out 00:05:55.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173766 s, 60.3 MB/s 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@51 -- # local i 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@41 -- # break 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.711 21:01:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@41 -- # break 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.972 21:01:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.233 21:01:34 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@65 -- # true 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.233 21:01:34 -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.233 21:01:34 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.233 21:01:34 -- event/event.sh@35 -- # sleep 3 00:05:56.494 [2024-06-08 21:01:34.414383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.494 [2024-06-08 21:01:34.476646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.494 [2024-06-08 21:01:34.476649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.494 [2024-06-08 21:01:34.508094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.494 [2024-06-08 21:01:34.508130] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.843 21:01:37 -- event/event.sh@23 -- # for i in {0..2} 00:05:59.843 21:01:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.843 spdk_app_start Round 2 00:05:59.843 21:01:37 -- event/event.sh@25 -- # waitforlisten 2164568 /var/tmp/spdk-nbd.sock 00:05:59.843 21:01:37 -- common/autotest_common.sh@819 -- # '[' -z 2164568 ']' 00:05:59.843 21:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.843 21:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.843 21:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
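Each app_repeat round above runs the same data check: random data is pushed through both nbd devices and read back against the source file before the devices are detached. A condensed sketch of that write/verify cycle, with paths and commands as they appear in the trace (error handling omitted):

```bash
# Sketch of the per-round nbd write/verify cycle seen in the trace above.
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
nbd_list=(/dev/nbd0 /dev/nbd1)

# 1 MiB of random data, written through each nbd device with O_DIRECT...
dd if=/dev/urandom of="$testdir/nbdrandtest" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$testdir/nbdrandtest" of="$nbd" bs=4096 count=256 oflag=direct
done

# ...then compared byte-for-byte against the source file.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$testdir/nbdrandtest" "$nbd"
done
rm "$testdir/nbdrandtest"

# Finally each device is detached again over the RPC socket.
for nbd in "${nbd_list[@]}"; do
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"
done
```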
00:05:59.843 21:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.843 21:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:59.843 21:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.843 21:01:37 -- common/autotest_common.sh@852 -- # return 0 00:05:59.843 21:01:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.843 Malloc0 00:05:59.843 21:01:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.843 Malloc1 00:05:59.843 21:01:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@12 -- # local i 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.843 /dev/nbd0 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.843 21:01:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:59.843 21:01:37 -- common/autotest_common.sh@857 -- # local i 00:05:59.843 21:01:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:59.843 21:01:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:59.843 21:01:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:59.843 21:01:37 -- common/autotest_common.sh@861 -- # break 00:05:59.843 21:01:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:59.843 21:01:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:59.843 21:01:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.843 1+0 records in 00:05:59.843 1+0 records out 00:05:59.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000114393 s, 35.8 MB/s 00:05:59.843 21:01:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.843 21:01:37 -- common/autotest_common.sh@874 -- # size=4096 00:05:59.843 21:01:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.843 21:01:37 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:05:59.843 21:01:37 -- common/autotest_common.sh@877 -- # return 0 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.843 21:01:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.105 /dev/nbd1 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.105 21:01:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:00.105 21:01:38 -- common/autotest_common.sh@857 -- # local i 00:06:00.105 21:01:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.105 21:01:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.105 21:01:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:00.105 21:01:38 -- common/autotest_common.sh@861 -- # break 00:06:00.105 21:01:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.105 21:01:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:00.105 21:01:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.105 1+0 records in 00:06:00.105 1+0 records out 00:06:00.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018685 s, 21.9 MB/s 00:06:00.105 21:01:38 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.105 21:01:38 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.105 21:01:38 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.105 21:01:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.105 21:01:38 -- common/autotest_common.sh@877 -- # return 0 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.105 21:01:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.366 { 00:06:00.366 "nbd_device": "/dev/nbd0", 00:06:00.366 "bdev_name": "Malloc0" 00:06:00.366 }, 00:06:00.366 { 00:06:00.366 "nbd_device": "/dev/nbd1", 00:06:00.366 "bdev_name": "Malloc1" 00:06:00.366 } 00:06:00.366 ]' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.366 { 00:06:00.366 "nbd_device": "/dev/nbd0", 00:06:00.366 "bdev_name": "Malloc0" 00:06:00.366 }, 00:06:00.366 { 00:06:00.366 "nbd_device": "/dev/nbd1", 00:06:00.366 "bdev_name": "Malloc1" 00:06:00.366 } 00:06:00.366 ]' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.366 /dev/nbd1' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.366 /dev/nbd1' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.366 21:01:38 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.366 256+0 records in 00:06:00.366 256+0 records out 00:06:00.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113639 s, 92.3 MB/s 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.366 256+0 records in 00:06:00.366 256+0 records out 00:06:00.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160805 s, 65.2 MB/s 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.366 256+0 records in 00:06:00.366 256+0 records out 00:06:00.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169723 s, 61.8 MB/s 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@51 -- # local i 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.366 21:01:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.627 21:01:38 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@41 -- # break 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.627 21:01:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@41 -- # break 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@65 -- # true 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.888 21:01:38 -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.888 21:01:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.148 21:01:39 -- event/event.sh@35 -- # sleep 3 00:06:01.148 [2024-06-08 21:01:39.229044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.409 [2024-06-08 21:01:39.290860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.409 [2024-06-08 21:01:39.290863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.409 [2024-06-08 21:01:39.322228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.409 [2024-06-08 21:01:39.322262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
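Round 2 finishes here just like the earlier rounds, and the surrounding structure is a short loop in app_repeat_test. A compressed sketch of its shape, reconstructed from the xtrace above; waitforlisten and killprocess are the helpers from autotest_common.sh, repeat_pid is assumed to have been captured when app_repeat was launched, and the data-verify step is the cycle sketched earlier.

```bash
# Sketch of the app_repeat_test round loop visible in the trace above.
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$sock"        # wait until the app answers RPCs

    # create Malloc0/Malloc1, attach them as /dev/nbd0 and /dev/nbd1,
    # run the write/verify cycle, then stop the nbd disks (sketched earlier)

    # tell the current app iteration to shut down; app_repeat then starts the next round
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM
    sleep 3
done
killprocess "$repeat_pid"
```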
00:06:04.711 21:01:42 -- event/event.sh@38 -- # waitforlisten 2164568 /var/tmp/spdk-nbd.sock 00:06:04.711 21:01:42 -- common/autotest_common.sh@819 -- # '[' -z 2164568 ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.711 21:01:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.711 21:01:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.711 21:01:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.711 21:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 21:01:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.711 21:01:42 -- common/autotest_common.sh@852 -- # return 0 00:06:04.711 21:01:42 -- event/event.sh@39 -- # killprocess 2164568 00:06:04.711 21:01:42 -- common/autotest_common.sh@926 -- # '[' -z 2164568 ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@930 -- # kill -0 2164568 00:06:04.711 21:01:42 -- common/autotest_common.sh@931 -- # uname 00:06:04.711 21:01:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2164568 00:06:04.711 21:01:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:04.711 21:01:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2164568' 00:06:04.711 killing process with pid 2164568 00:06:04.711 21:01:42 -- common/autotest_common.sh@945 -- # kill 2164568 00:06:04.711 21:01:42 -- common/autotest_common.sh@950 -- # wait 2164568 00:06:04.711 spdk_app_start is called in Round 0. 00:06:04.711 Shutdown signal received, stop current app iteration 00:06:04.711 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:04.711 spdk_app_start is called in Round 1. 00:06:04.711 Shutdown signal received, stop current app iteration 00:06:04.711 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:04.711 spdk_app_start is called in Round 2. 00:06:04.711 Shutdown signal received, stop current app iteration 00:06:04.711 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:04.711 spdk_app_start is called in Round 3. 
00:06:04.711 Shutdown signal received, stop current app iteration 00:06:04.711 21:01:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.711 21:01:42 -- event/event.sh@42 -- # return 0 00:06:04.711 00:06:04.711 real 0m15.414s 00:06:04.711 user 0m33.140s 00:06:04.711 sys 0m2.025s 00:06:04.711 21:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.711 21:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 ************************************ 00:06:04.711 END TEST app_repeat 00:06:04.711 ************************************ 00:06:04.711 21:01:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.711 21:01:42 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.711 21:01:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.711 21:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 ************************************ 00:06:04.711 START TEST cpu_locks 00:06:04.711 ************************************ 00:06:04.711 21:01:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.711 * Looking for test storage... 00:06:04.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:04.711 21:01:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.711 21:01:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.711 21:01:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.711 21:01:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.711 21:01:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.711 21:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:04.711 ************************************ 00:06:04.711 START TEST default_locks 00:06:04.711 ************************************ 00:06:04.711 21:01:42 -- common/autotest_common.sh@1104 -- # default_locks 00:06:04.711 21:01:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2167866 00:06:04.711 21:01:42 -- event/cpu_locks.sh@47 -- # waitforlisten 2167866 00:06:04.711 21:01:42 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.711 21:01:42 -- common/autotest_common.sh@819 -- # '[' -z 2167866 ']' 00:06:04.711 21:01:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.711 21:01:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.711 21:01:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.712 21:01:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.712 21:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:04.712 [2024-06-08 21:01:42.615355] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:04.712 [2024-06-08 21:01:42.615436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167866 ] 00:06:04.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.712 [2024-06-08 21:01:42.681183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.712 [2024-06-08 21:01:42.752017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.712 [2024-06-08 21:01:42.752167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.661 21:01:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.661 21:01:43 -- common/autotest_common.sh@852 -- # return 0 00:06:05.661 21:01:43 -- event/cpu_locks.sh@49 -- # locks_exist 2167866 00:06:05.661 21:01:43 -- event/cpu_locks.sh@22 -- # lslocks -p 2167866 00:06:05.661 21:01:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.922 lslocks: write error 00:06:05.922 21:01:43 -- event/cpu_locks.sh@50 -- # killprocess 2167866 00:06:05.922 21:01:43 -- common/autotest_common.sh@926 -- # '[' -z 2167866 ']' 00:06:05.922 21:01:43 -- common/autotest_common.sh@930 -- # kill -0 2167866 00:06:05.922 21:01:43 -- common/autotest_common.sh@931 -- # uname 00:06:05.922 21:01:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.922 21:01:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2167866 00:06:05.922 21:01:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.922 21:01:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.922 21:01:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2167866' 00:06:05.922 killing process with pid 2167866 00:06:05.922 21:01:43 -- common/autotest_common.sh@945 -- # kill 2167866 00:06:05.922 21:01:43 -- common/autotest_common.sh@950 -- # wait 2167866 00:06:06.183 21:01:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2167866 00:06:06.183 21:01:44 -- common/autotest_common.sh@640 -- # local es=0 00:06:06.183 21:01:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2167866 00:06:06.183 21:01:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:06.183 21:01:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.183 21:01:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:06.183 21:01:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.183 21:01:44 -- common/autotest_common.sh@643 -- # waitforlisten 2167866 00:06:06.183 21:01:44 -- common/autotest_common.sh@819 -- # '[' -z 2167866 ']' 00:06:06.183 21:01:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.183 21:01:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.183 21:01:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
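Two details in the block above are easy to misread. First, locks_exist verifies the core claim by listing the target's file locks and grepping for the spdk_cpu_lock entries; the stray 'lslocks: write error' most likely just means grep -q closed the pipe as soon as it matched, not that the check failed. Written out as a sketch:

  pid=2167866
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds its per-core lock file"
  fi

Second, the waitforlisten issued after the kill is wrapped in NOT on purpose: once the pid is gone the helper has to return non-zero, which is the es=1 path recorded just below.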
00:06:06.183 21:01:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.183 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2167866) - No such process 00:06:06.183 ERROR: process (pid: 2167866) is no longer running 00:06:06.183 21:01:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.183 21:01:44 -- common/autotest_common.sh@852 -- # return 1 00:06:06.183 21:01:44 -- common/autotest_common.sh@643 -- # es=1 00:06:06.183 21:01:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.183 21:01:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:06.183 21:01:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.183 21:01:44 -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.183 21:01:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.183 21:01:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.183 21:01:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.183 00:06:06.183 real 0m1.557s 00:06:06.183 user 0m1.655s 00:06:06.183 sys 0m0.527s 00:06:06.183 21:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.183 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.183 ************************************ 00:06:06.183 END TEST default_locks 00:06:06.183 ************************************ 00:06:06.183 21:01:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.183 21:01:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.183 21:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.183 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.183 ************************************ 00:06:06.183 START TEST default_locks_via_rpc 00:06:06.183 ************************************ 00:06:06.183 21:01:44 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:06.183 21:01:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2168234 00:06:06.183 21:01:44 -- event/cpu_locks.sh@63 -- # waitforlisten 2168234 00:06:06.183 21:01:44 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.183 21:01:44 -- common/autotest_common.sh@819 -- # '[' -z 2168234 ']' 00:06:06.183 21:01:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.183 21:01:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.183 21:01:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.183 21:01:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.183 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.183 [2024-06-08 21:01:44.222798] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
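The _via_rpc variant below exercises the same per-core lock, but toggles it at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs instead of relying on process lifetime. Roughly, assuming scripts/rpc.py against the default /var/tmp/spdk.sock:

  pid=2168234                                      # the spdk_tgt started just above
  scripts/rpc.py framework_disable_cpumask_locks   # releases the spdk_cpu_lock_* claims
  scripts/rpc.py framework_enable_cpumask_locks    # takes them back
  lslocks -p "$pid" | grep -q spdk_cpu_lock        # must match again once locks are re-enabled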
00:06:06.183 [2024-06-08 21:01:44.222856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168234 ] 00:06:06.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.444 [2024-06-08 21:01:44.282164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.444 [2024-06-08 21:01:44.344370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.444 [2024-06-08 21:01:44.344514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.015 21:01:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.015 21:01:44 -- common/autotest_common.sh@852 -- # return 0 00:06:07.015 21:01:44 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.015 21:01:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.015 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.015 21:01:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.015 21:01:44 -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.015 21:01:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.015 21:01:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.015 21:01:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.015 21:01:44 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.015 21:01:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.015 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.015 21:01:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.015 21:01:44 -- event/cpu_locks.sh@71 -- # locks_exist 2168234 00:06:07.015 21:01:44 -- event/cpu_locks.sh@22 -- # lslocks -p 2168234 00:06:07.015 21:01:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.599 21:01:45 -- event/cpu_locks.sh@73 -- # killprocess 2168234 00:06:07.599 21:01:45 -- common/autotest_common.sh@926 -- # '[' -z 2168234 ']' 00:06:07.599 21:01:45 -- common/autotest_common.sh@930 -- # kill -0 2168234 00:06:07.599 21:01:45 -- common/autotest_common.sh@931 -- # uname 00:06:07.599 21:01:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.599 21:01:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2168234 00:06:07.599 21:01:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.599 21:01:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.599 21:01:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2168234' 00:06:07.599 killing process with pid 2168234 00:06:07.599 21:01:45 -- common/autotest_common.sh@945 -- # kill 2168234 00:06:07.599 21:01:45 -- common/autotest_common.sh@950 -- # wait 2168234 00:06:07.599 00:06:07.599 real 0m1.510s 00:06:07.599 user 0m1.615s 00:06:07.599 sys 0m0.499s 00:06:07.599 21:01:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.599 21:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:07.599 ************************************ 00:06:07.599 END TEST default_locks_via_rpc 00:06:07.599 ************************************ 00:06:07.859 21:01:45 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.859 21:01:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.859 21:01:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.859 21:01:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.859 ************************************ 00:06:07.859 START TEST non_locking_app_on_locked_coremask 00:06:07.859 ************************************ 00:06:07.859 21:01:45 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:07.859 21:01:45 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2168599 00:06:07.859 21:01:45 -- event/cpu_locks.sh@81 -- # waitforlisten 2168599 /var/tmp/spdk.sock 00:06:07.859 21:01:45 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.859 21:01:45 -- common/autotest_common.sh@819 -- # '[' -z 2168599 ']' 00:06:07.859 21:01:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.859 21:01:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.859 21:01:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.859 21:01:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.859 21:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:07.859 [2024-06-08 21:01:45.765517] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:07.859 [2024-06-08 21:01:45.765578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168599 ] 00:06:07.859 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.859 [2024-06-08 21:01:45.823950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.859 [2024-06-08 21:01:45.887703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.859 [2024-06-08 21:01:45.887827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.430 21:01:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.430 21:01:46 -- common/autotest_common.sh@852 -- # return 0 00:06:08.430 21:01:46 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2168741 00:06:08.430 21:01:46 -- event/cpu_locks.sh@85 -- # waitforlisten 2168741 /var/tmp/spdk2.sock 00:06:08.430 21:01:46 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.430 21:01:46 -- common/autotest_common.sh@819 -- # '[' -z 2168741 ']' 00:06:08.430 21:01:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.430 21:01:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.430 21:01:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.430 21:01:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.430 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:06:08.690 [2024-06-08 21:01:46.549082] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
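non_locking_app_on_locked_coremask runs two targets on the same core: the first claims core 0 normally, the second is started with --disable-cpumask-locks plus its own RPC socket so it can share the core without fighting over the lock, which is what the 'CPU core locks deactivated' notice below confirms. Stripped of the harness plumbing, the launch pair looks roughly like this:

  # full workspace paths shortened; both commands point at build/bin/spdk_tgt
  spdk_tgt -m 0x1 &                                                   # claims core 0 (spdk_cpu_lock_000)
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # shares core 0 without claiming it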
00:06:08.691 [2024-06-08 21:01:46.549133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168741 ] 00:06:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.691 [2024-06-08 21:01:46.641939] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.691 [2024-06-08 21:01:46.641968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.691 [2024-06-08 21:01:46.769588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.691 [2024-06-08 21:01:46.769717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.262 21:01:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.262 21:01:47 -- common/autotest_common.sh@852 -- # return 0 00:06:09.262 21:01:47 -- event/cpu_locks.sh@87 -- # locks_exist 2168599 00:06:09.262 21:01:47 -- event/cpu_locks.sh@22 -- # lslocks -p 2168599 00:06:09.262 21:01:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.833 lslocks: write error 00:06:09.833 21:01:47 -- event/cpu_locks.sh@89 -- # killprocess 2168599 00:06:09.833 21:01:47 -- common/autotest_common.sh@926 -- # '[' -z 2168599 ']' 00:06:09.833 21:01:47 -- common/autotest_common.sh@930 -- # kill -0 2168599 00:06:09.833 21:01:47 -- common/autotest_common.sh@931 -- # uname 00:06:09.833 21:01:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:09.833 21:01:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2168599 00:06:10.093 21:01:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.093 21:01:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.093 21:01:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2168599' 00:06:10.093 killing process with pid 2168599 00:06:10.093 21:01:47 -- common/autotest_common.sh@945 -- # kill 2168599 00:06:10.093 21:01:47 -- common/autotest_common.sh@950 -- # wait 2168599 00:06:10.354 21:01:48 -- event/cpu_locks.sh@90 -- # killprocess 2168741 00:06:10.354 21:01:48 -- common/autotest_common.sh@926 -- # '[' -z 2168741 ']' 00:06:10.354 21:01:48 -- common/autotest_common.sh@930 -- # kill -0 2168741 00:06:10.354 21:01:48 -- common/autotest_common.sh@931 -- # uname 00:06:10.354 21:01:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.354 21:01:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2168741 00:06:10.354 21:01:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.354 21:01:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.354 21:01:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2168741' 00:06:10.354 killing process with pid 2168741 00:06:10.354 21:01:48 -- common/autotest_common.sh@945 -- # kill 2168741 00:06:10.354 21:01:48 -- common/autotest_common.sh@950 -- # wait 2168741 00:06:10.615 00:06:10.615 real 0m2.907s 00:06:10.615 user 0m3.146s 00:06:10.615 sys 0m0.859s 00:06:10.615 21:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.615 21:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:10.615 ************************************ 00:06:10.615 END TEST non_locking_app_on_locked_coremask 00:06:10.615 ************************************ 00:06:10.615 21:01:48 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:10.615 21:01:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.615 21:01:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.615 21:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:10.615 ************************************ 00:06:10.615 START TEST locking_app_on_unlocked_coremask 00:06:10.615 ************************************ 00:06:10.615 21:01:48 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:10.615 21:01:48 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2169310 00:06:10.615 21:01:48 -- event/cpu_locks.sh@99 -- # waitforlisten 2169310 /var/tmp/spdk.sock 00:06:10.615 21:01:48 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:10.615 21:01:48 -- common/autotest_common.sh@819 -- # '[' -z 2169310 ']' 00:06:10.615 21:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.615 21:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.615 21:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.615 21:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.615 21:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:10.876 [2024-06-08 21:01:48.718152] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:10.876 [2024-06-08 21:01:48.718213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169310 ] 00:06:10.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.876 [2024-06-08 21:01:48.777127] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.876 [2024-06-08 21:01:48.777159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.876 [2024-06-08 21:01:48.843114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.876 [2024-06-08 21:01:48.843236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.449 21:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.449 21:01:49 -- common/autotest_common.sh@852 -- # return 0 00:06:11.449 21:01:49 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2169327 00:06:11.449 21:01:49 -- event/cpu_locks.sh@103 -- # waitforlisten 2169327 /var/tmp/spdk2.sock 00:06:11.449 21:01:49 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.449 21:01:49 -- common/autotest_common.sh@819 -- # '[' -z 2169327 ']' 00:06:11.449 21:01:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.449 21:01:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.449 21:01:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
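locking_app_on_unlocked_coremask flips the previous arrangement: here the first target is the one started with --disable-cpumask-locks, and the second, a plain -m 0x1 instance on /var/tmp/spdk2.sock, is expected to claim core 0 itself, which is why the locks_exist check further down runs against the second pid. As a sketch:

  spdk_tgt -m 0x1 --disable-cpumask-locks &            # first target leaves core 0 unclaimed
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &             # second target takes spdk_cpu_lock_000 itself
  lslocks -p "$second_pid" | grep -q spdk_cpu_lock     # $second_pid stands in for the second target's pid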
00:06:11.449 21:01:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.449 21:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:11.449 [2024-06-08 21:01:49.516377] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:11.449 [2024-06-08 21:01:49.516430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169327 ] 00:06:11.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.709 [2024-06-08 21:01:49.603399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.709 [2024-06-08 21:01:49.730462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.709 [2024-06-08 21:01:49.730593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.280 21:01:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.280 21:01:50 -- common/autotest_common.sh@852 -- # return 0 00:06:12.280 21:01:50 -- event/cpu_locks.sh@105 -- # locks_exist 2169327 00:06:12.280 21:01:50 -- event/cpu_locks.sh@22 -- # lslocks -p 2169327 00:06:12.280 21:01:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.864 lslocks: write error 00:06:12.864 21:01:50 -- event/cpu_locks.sh@107 -- # killprocess 2169310 00:06:12.864 21:01:50 -- common/autotest_common.sh@926 -- # '[' -z 2169310 ']' 00:06:12.864 21:01:50 -- common/autotest_common.sh@930 -- # kill -0 2169310 00:06:12.864 21:01:50 -- common/autotest_common.sh@931 -- # uname 00:06:12.864 21:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.864 21:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2169310 00:06:12.865 21:01:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:12.865 21:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:12.865 21:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2169310' 00:06:12.865 killing process with pid 2169310 00:06:12.865 21:01:50 -- common/autotest_common.sh@945 -- # kill 2169310 00:06:12.865 21:01:50 -- common/autotest_common.sh@950 -- # wait 2169310 00:06:13.441 21:01:51 -- event/cpu_locks.sh@108 -- # killprocess 2169327 00:06:13.441 21:01:51 -- common/autotest_common.sh@926 -- # '[' -z 2169327 ']' 00:06:13.441 21:01:51 -- common/autotest_common.sh@930 -- # kill -0 2169327 00:06:13.441 21:01:51 -- common/autotest_common.sh@931 -- # uname 00:06:13.441 21:01:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:13.441 21:01:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2169327 00:06:13.441 21:01:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:13.441 21:01:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:13.441 21:01:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2169327' 00:06:13.441 killing process with pid 2169327 00:06:13.441 21:01:51 -- common/autotest_common.sh@945 -- # kill 2169327 00:06:13.441 21:01:51 -- common/autotest_common.sh@950 -- # wait 2169327 00:06:13.702 00:06:13.702 real 0m2.924s 00:06:13.703 user 0m3.166s 00:06:13.703 sys 0m0.879s 00:06:13.703 21:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.703 21:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.703 ************************************ 00:06:13.703 END TEST locking_app_on_unlocked_coremask 
00:06:13.703 ************************************ 00:06:13.703 21:01:51 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.703 21:01:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.703 21:01:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.703 21:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.703 ************************************ 00:06:13.703 START TEST locking_app_on_locked_coremask 00:06:13.703 ************************************ 00:06:13.703 21:01:51 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:13.703 21:01:51 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2169861 00:06:13.703 21:01:51 -- event/cpu_locks.sh@116 -- # waitforlisten 2169861 /var/tmp/spdk.sock 00:06:13.703 21:01:51 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.703 21:01:51 -- common/autotest_common.sh@819 -- # '[' -z 2169861 ']' 00:06:13.703 21:01:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.703 21:01:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.703 21:01:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.703 21:01:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.703 21:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.703 [2024-06-08 21:01:51.695850] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:13.703 [2024-06-08 21:01:51.695918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169861 ] 00:06:13.703 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.703 [2024-06-08 21:01:51.756544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.963 [2024-06-08 21:01:51.825272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.963 [2024-06-08 21:01:51.825408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.537 21:01:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.537 21:01:52 -- common/autotest_common.sh@852 -- # return 0 00:06:14.537 21:01:52 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.537 21:01:52 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2170039 00:06:14.537 21:01:52 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2170039 /var/tmp/spdk2.sock 00:06:14.537 21:01:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:14.537 21:01:52 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2170039 /var/tmp/spdk2.sock 00:06:14.537 21:01:52 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:14.537 21:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:14.537 21:01:52 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:14.537 21:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:14.537 21:01:52 -- common/autotest_common.sh@643 -- # waitforlisten 2170039 /var/tmp/spdk2.sock 00:06:14.537 21:01:52 -- common/autotest_common.sh@819 -- 
# '[' -z 2170039 ']' 00:06:14.537 21:01:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.537 21:01:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.537 21:01:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.537 21:01:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.537 21:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:14.537 [2024-06-08 21:01:52.470049] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:14.537 [2024-06-08 21:01:52.470096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170039 ] 00:06:14.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.537 [2024-06-08 21:01:52.558266] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2169861 has claimed it. 00:06:14.537 [2024-06-08 21:01:52.558304] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2170039) - No such process 00:06:15.107 ERROR: process (pid: 2170039) is no longer running 00:06:15.107 21:01:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.107 21:01:53 -- common/autotest_common.sh@852 -- # return 1 00:06:15.107 21:01:53 -- common/autotest_common.sh@643 -- # es=1 00:06:15.107 21:01:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:15.107 21:01:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:15.107 21:01:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:15.107 21:01:53 -- event/cpu_locks.sh@122 -- # locks_exist 2169861 00:06:15.107 21:01:53 -- event/cpu_locks.sh@22 -- # lslocks -p 2169861 00:06:15.107 21:01:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.747 lslocks: write error 00:06:15.747 21:01:53 -- event/cpu_locks.sh@124 -- # killprocess 2169861 00:06:15.747 21:01:53 -- common/autotest_common.sh@926 -- # '[' -z 2169861 ']' 00:06:15.747 21:01:53 -- common/autotest_common.sh@930 -- # kill -0 2169861 00:06:15.747 21:01:53 -- common/autotest_common.sh@931 -- # uname 00:06:15.747 21:01:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.747 21:01:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2169861 00:06:15.748 21:01:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.748 21:01:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.748 21:01:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2169861' 00:06:15.748 killing process with pid 2169861 00:06:15.748 21:01:53 -- common/autotest_common.sh@945 -- # kill 2169861 00:06:15.748 21:01:53 -- common/autotest_common.sh@950 -- # wait 2169861 00:06:15.748 00:06:15.748 real 0m2.112s 00:06:15.748 user 0m2.338s 00:06:15.748 sys 0m0.547s 00:06:15.748 21:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.748 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.748 ************************************ 00:06:15.748 END TEST locking_app_on_locked_coremask 00:06:15.748 ************************************ 00:06:15.748 
21:01:53 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:15.748 21:01:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.748 21:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.748 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.748 ************************************ 00:06:15.748 START TEST locking_overlapped_coremask 00:06:15.748 ************************************ 00:06:15.748 21:01:53 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:15.748 21:01:53 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2170401 00:06:15.748 21:01:53 -- event/cpu_locks.sh@133 -- # waitforlisten 2170401 /var/tmp/spdk.sock 00:06:15.748 21:01:53 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:15.748 21:01:53 -- common/autotest_common.sh@819 -- # '[' -z 2170401 ']' 00:06:15.748 21:01:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.748 21:01:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.748 21:01:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.748 21:01:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.748 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.015 [2024-06-08 21:01:53.839735] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:16.015 [2024-06-08 21:01:53.839795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170401 ] 00:06:16.015 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.015 [2024-06-08 21:01:53.898708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.015 [2024-06-08 21:01:53.965180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.015 [2024-06-08 21:01:53.965436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.015 [2024-06-08 21:01:53.965559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.015 [2024-06-08 21:01:53.965563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.587 21:01:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.587 21:01:54 -- common/autotest_common.sh@852 -- # return 0 00:06:16.587 21:01:54 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2170424 00:06:16.587 21:01:54 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2170424 /var/tmp/spdk2.sock 00:06:16.587 21:01:54 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.587 21:01:54 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:16.587 21:01:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2170424 /var/tmp/spdk2.sock 00:06:16.587 21:01:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:16.587 21:01:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.587 21:01:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:16.587 21:01:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.587 21:01:54 
-- common/autotest_common.sh@643 -- # waitforlisten 2170424 /var/tmp/spdk2.sock 00:06:16.587 21:01:54 -- common/autotest_common.sh@819 -- # '[' -z 2170424 ']' 00:06:16.587 21:01:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.587 21:01:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.587 21:01:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.587 21:01:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.587 21:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:16.587 [2024-06-08 21:01:54.662025] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:16.587 [2024-06-08 21:01:54.662078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170424 ] 00:06:16.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.848 [2024-06-08 21:01:54.733287] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2170401 has claimed it. 00:06:16.848 [2024-06-08 21:01:54.733318] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2170424) - No such process 00:06:17.420 ERROR: process (pid: 2170424) is no longer running 00:06:17.420 21:01:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.420 21:01:55 -- common/autotest_common.sh@852 -- # return 1 00:06:17.420 21:01:55 -- common/autotest_common.sh@643 -- # es=1 00:06:17.420 21:01:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.420 21:01:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.420 21:01:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.420 21:01:55 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.420 21:01:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.420 21:01:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.420 21:01:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.420 21:01:55 -- event/cpu_locks.sh@141 -- # killprocess 2170401 00:06:17.420 21:01:55 -- common/autotest_common.sh@926 -- # '[' -z 2170401 ']' 00:06:17.420 21:01:55 -- common/autotest_common.sh@930 -- # kill -0 2170401 00:06:17.420 21:01:55 -- common/autotest_common.sh@931 -- # uname 00:06:17.420 21:01:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.420 21:01:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2170401 00:06:17.420 21:01:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.420 21:01:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.420 21:01:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2170401' 00:06:17.420 killing process with pid 2170401 00:06:17.420 21:01:55 -- common/autotest_common.sh@945 -- # kill 2170401 00:06:17.420 21:01:55 
-- common/autotest_common.sh@950 -- # wait 2170401 00:06:17.681 00:06:17.681 real 0m1.741s 00:06:17.681 user 0m4.947s 00:06:17.681 sys 0m0.360s 00:06:17.681 21:01:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.681 21:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.681 ************************************ 00:06:17.681 END TEST locking_overlapped_coremask 00:06:17.681 ************************************ 00:06:17.681 21:01:55 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:17.681 21:01:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.681 21:01:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.681 21:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.681 ************************************ 00:06:17.681 START TEST locking_overlapped_coremask_via_rpc 00:06:17.681 ************************************ 00:06:17.681 21:01:55 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:17.681 21:01:55 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2170783 00:06:17.681 21:01:55 -- event/cpu_locks.sh@149 -- # waitforlisten 2170783 /var/tmp/spdk.sock 00:06:17.681 21:01:55 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:17.681 21:01:55 -- common/autotest_common.sh@819 -- # '[' -z 2170783 ']' 00:06:17.681 21:01:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.681 21:01:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.681 21:01:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.681 21:01:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.681 21:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.681 [2024-06-08 21:01:55.624408] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:17.681 [2024-06-08 21:01:55.624465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170783 ] 00:06:17.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.681 [2024-06-08 21:01:55.682314] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
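Both overlapped tests use the same pair of masks, and the collision they provoke is on exactly one core: 0x7 is binary 00111 (cores 0-2) while 0x1c is binary 11100 (cores 2-4), so core 2 is the only core both targets want. That matches the 'Cannot create lock on core 2' error above and the 'Failed to claim CPU core: 2' response further down. The overlap can be checked directly:

  printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only bit 2 / core 2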
00:06:17.681 [2024-06-08 21:01:55.682341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.681 [2024-06-08 21:01:55.745714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.681 [2024-06-08 21:01:55.745963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.681 [2024-06-08 21:01:55.746078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.681 [2024-06-08 21:01:55.746081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.624 21:01:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.624 21:01:56 -- common/autotest_common.sh@852 -- # return 0 00:06:18.624 21:01:56 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2170806 00:06:18.624 21:01:56 -- event/cpu_locks.sh@153 -- # waitforlisten 2170806 /var/tmp/spdk2.sock 00:06:18.624 21:01:56 -- common/autotest_common.sh@819 -- # '[' -z 2170806 ']' 00:06:18.624 21:01:56 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:18.624 21:01:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.624 21:01:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.624 21:01:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.624 21:01:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.624 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.624 [2024-06-08 21:01:56.436309] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:18.624 [2024-06-08 21:01:56.436362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170806 ] 00:06:18.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.624 [2024-06-08 21:01:56.506251] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.624 [2024-06-08 21:01:56.506273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.624 [2024-06-08 21:01:56.609941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.624 [2024-06-08 21:01:56.610172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.624 [2024-06-08 21:01:56.613521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.624 [2024-06-08 21:01:56.613525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.195 21:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.196 21:01:57 -- common/autotest_common.sh@852 -- # return 0 00:06:19.196 21:01:57 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.196 21:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.196 21:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 21:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.196 21:01:57 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.196 21:01:57 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.196 21:01:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.196 21:01:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:19.196 21:01:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.196 21:01:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:19.196 21:01:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.196 21:01:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.196 21:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.196 21:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.196 [2024-06-08 21:01:57.217471] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2170783 has claimed it. 00:06:19.196 request: 00:06:19.196 { 00:06:19.196 "method": "framework_enable_cpumask_locks", 00:06:19.196 "req_id": 1 00:06:19.196 } 00:06:19.196 Got JSON-RPC error response 00:06:19.196 response: 00:06:19.196 { 00:06:19.196 "code": -32603, 00:06:19.196 "message": "Failed to claim CPU core: 2" 00:06:19.196 } 00:06:19.196 21:01:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:19.196 21:01:57 -- common/autotest_common.sh@643 -- # es=1 00:06:19.196 21:01:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:19.196 21:01:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:19.196 21:01:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:19.196 21:01:57 -- event/cpu_locks.sh@158 -- # waitforlisten 2170783 /var/tmp/spdk.sock 00:06:19.196 21:01:57 -- common/autotest_common.sh@819 -- # '[' -z 2170783 ']' 00:06:19.196 21:01:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.196 21:01:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.196 21:01:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
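The refused request above is the point of the _via_rpc variant: the first target (mask 0x7) re-armed its locks over RPC, so when the second target (mask 0x1c, locks still disabled) asks for the same thing from its own socket, the claim on the shared core fails and the call surfaces a JSON-RPC error instead of killing the process. Reproduced as a sketch against the sockets used here:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target: claims cores 0-2, succeeds
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: error -32603,
                                                                         #   "Failed to claim CPU core: 2"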
00:06:19.196 21:01:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.196 21:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.457 21:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.457 21:01:57 -- common/autotest_common.sh@852 -- # return 0 00:06:19.457 21:01:57 -- event/cpu_locks.sh@159 -- # waitforlisten 2170806 /var/tmp/spdk2.sock 00:06:19.457 21:01:57 -- common/autotest_common.sh@819 -- # '[' -z 2170806 ']' 00:06:19.457 21:01:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.457 21:01:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.457 21:01:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.457 21:01:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.457 21:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.457 21:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.457 21:01:57 -- common/autotest_common.sh@852 -- # return 0 00:06:19.457 21:01:57 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:19.457 21:01:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.457 21:01:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.457 21:01:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.457 00:06:19.457 real 0m1.968s 00:06:19.457 user 0m0.745s 00:06:19.457 sys 0m0.148s 00:06:19.457 21:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.457 21:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.457 ************************************ 00:06:19.457 END TEST locking_overlapped_coremask_via_rpc 00:06:19.457 ************************************ 00:06:19.718 21:01:57 -- event/cpu_locks.sh@174 -- # cleanup 00:06:19.718 21:01:57 -- event/cpu_locks.sh@15 -- # [[ -z 2170783 ]] 00:06:19.718 21:01:57 -- event/cpu_locks.sh@15 -- # killprocess 2170783 00:06:19.718 21:01:57 -- common/autotest_common.sh@926 -- # '[' -z 2170783 ']' 00:06:19.718 21:01:57 -- common/autotest_common.sh@930 -- # kill -0 2170783 00:06:19.718 21:01:57 -- common/autotest_common.sh@931 -- # uname 00:06:19.718 21:01:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:19.718 21:01:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2170783 00:06:19.718 21:01:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:19.718 21:01:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:19.718 21:01:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2170783' 00:06:19.718 killing process with pid 2170783 00:06:19.718 21:01:57 -- common/autotest_common.sh@945 -- # kill 2170783 00:06:19.718 21:01:57 -- common/autotest_common.sh@950 -- # wait 2170783 00:06:19.978 21:01:57 -- event/cpu_locks.sh@16 -- # [[ -z 2170806 ]] 00:06:19.978 21:01:57 -- event/cpu_locks.sh@16 -- # killprocess 2170806 00:06:19.978 21:01:57 -- common/autotest_common.sh@926 -- # '[' -z 2170806 ']' 00:06:19.978 21:01:57 -- common/autotest_common.sh@930 -- # kill -0 2170806 00:06:19.979 21:01:57 -- common/autotest_common.sh@931 -- # uname 
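check_remaining_locks above is just a filename comparison: after the failed claim, the only lock files left under /var/tmp must be the ones belonging to the surviving 0x7 target. Expanded from the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)                      # whatever lock files actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2, matching the surviving mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"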
00:06:19.979 21:01:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:19.979 21:01:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2170806 00:06:19.979 21:01:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:19.979 21:01:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:19.979 21:01:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2170806' 00:06:19.979 killing process with pid 2170806 00:06:19.979 21:01:57 -- common/autotest_common.sh@945 -- # kill 2170806 00:06:19.979 21:01:57 -- common/autotest_common.sh@950 -- # wait 2170806 00:06:20.240 21:01:58 -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.240 21:01:58 -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.240 21:01:58 -- event/cpu_locks.sh@15 -- # [[ -z 2170783 ]] 00:06:20.240 21:01:58 -- event/cpu_locks.sh@15 -- # killprocess 2170783 00:06:20.240 21:01:58 -- common/autotest_common.sh@926 -- # '[' -z 2170783 ']' 00:06:20.240 21:01:58 -- common/autotest_common.sh@930 -- # kill -0 2170783 00:06:20.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2170783) - No such process 00:06:20.240 21:01:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2170783 is not found' 00:06:20.240 Process with pid 2170783 is not found 00:06:20.240 21:01:58 -- event/cpu_locks.sh@16 -- # [[ -z 2170806 ]] 00:06:20.240 21:01:58 -- event/cpu_locks.sh@16 -- # killprocess 2170806 00:06:20.240 21:01:58 -- common/autotest_common.sh@926 -- # '[' -z 2170806 ']' 00:06:20.240 21:01:58 -- common/autotest_common.sh@930 -- # kill -0 2170806 00:06:20.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2170806) - No such process 00:06:20.240 21:01:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2170806 is not found' 00:06:20.240 Process with pid 2170806 is not found 00:06:20.240 21:01:58 -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.240 00:06:20.240 real 0m15.640s 00:06:20.240 user 0m26.974s 00:06:20.240 sys 0m4.560s 00:06:20.240 21:01:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.240 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.240 ************************************ 00:06:20.240 END TEST cpu_locks 00:06:20.240 ************************************ 00:06:20.240 00:06:20.240 real 0m41.310s 00:06:20.240 user 1m20.751s 00:06:20.240 sys 0m7.397s 00:06:20.240 21:01:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.240 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.240 ************************************ 00:06:20.240 END TEST event 00:06:20.240 ************************************ 00:06:20.240 21:01:58 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:20.240 21:01:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.240 21:01:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.240 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.240 ************************************ 00:06:20.240 START TEST thread 00:06:20.240 ************************************ 00:06:20.240 21:01:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:20.240 * Looking for test storage... 
00:06:20.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:20.240 21:01:58 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.240 21:01:58 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:20.240 21:01:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.240 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.240 ************************************ 00:06:20.240 START TEST thread_poller_perf 00:06:20.240 ************************************ 00:06:20.240 21:01:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.240 [2024-06-08 21:01:58.285015] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:20.240 [2024-06-08 21:01:58.285068] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171311 ] 00:06:20.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.500 [2024-06-08 21:01:58.336415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.500 [2024-06-08 21:01:58.399600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.500 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:21.440 ====================================== 00:06:21.440 busy:2412501220 (cyc) 00:06:21.440 total_run_count: 276000 00:06:21.440 tsc_hz: 2400000000 (cyc) 00:06:21.440 ====================================== 00:06:21.440 poller_cost: 8740 (cyc), 3641 (nsec) 00:06:21.440 00:06:21.440 real 0m1.184s 00:06:21.440 user 0m1.126s 00:06:21.440 sys 0m0.054s 00:06:21.440 21:01:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.440 21:01:59 -- common/autotest_common.sh@10 -- # set +x 00:06:21.440 ************************************ 00:06:21.440 END TEST thread_poller_perf 00:06:21.440 ************************************ 00:06:21.440 21:01:59 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.440 21:01:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:21.440 21:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.440 21:01:59 -- common/autotest_common.sh@10 -- # set +x 00:06:21.440 ************************************ 00:06:21.440 START TEST thread_poller_perf 00:06:21.440 ************************************ 00:06:21.440 21:01:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.440 [2024-06-08 21:01:59.527085] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
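The first poller_perf summary above is internally consistent with the reported TSC rate: 8740 cycles per poller call at 2.4 GHz is the 3641 nsec the tool prints, and the busy cycle count works out to almost exactly the one second requested with -t 1. Checked with bc:

  echo '2412501220 / 276000' | bc                  # 8740 cycles per poller call = the reported poller_cost
  echo 'scale=2; 8740 * 10^9 / 2400000000' | bc    # 3641.66 ns, printed by the tool as 3641 (nsec)
  echo 'scale=3; 2412501220 / 2400000000' | bc     # 1.005 s of busy time for the requested -t 1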
00:06:21.440 [2024-06-08 21:01:59.527195] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171587 ] 00:06:21.701 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.701 [2024-06-08 21:01:59.606289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.701 [2024-06-08 21:01:59.672295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.701 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:22.642 ====================================== 00:06:22.642 busy:2402775190 (cyc) 00:06:22.642 total_run_count: 3791000 00:06:22.642 tsc_hz: 2400000000 (cyc) 00:06:22.642 ====================================== 00:06:22.642 poller_cost: 633 (cyc), 263 (nsec) 00:06:22.642 00:06:22.642 real 0m1.221s 00:06:22.642 user 0m1.135s 00:06:22.642 sys 0m0.081s 00:06:22.642 21:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.642 21:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.642 ************************************ 00:06:22.642 END TEST thread_poller_perf 00:06:22.642 ************************************ 00:06:22.904 21:02:00 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.904 00:06:22.904 real 0m2.587s 00:06:22.904 user 0m2.332s 00:06:22.904 sys 0m0.267s 00:06:22.904 21:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.904 21:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.904 ************************************ 00:06:22.904 END TEST thread 00:06:22.904 ************************************ 00:06:22.904 21:02:00 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:22.904 21:02:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:22.904 21:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.904 21:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.904 ************************************ 00:06:22.904 START TEST accel 00:06:22.904 ************************************ 00:06:22.904 21:02:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:22.904 * Looking for test storage... 00:06:22.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:22.904 21:02:00 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:22.904 21:02:00 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:22.904 21:02:00 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.904 21:02:00 -- accel/accel.sh@59 -- # spdk_tgt_pid=2171976 00:06:22.904 21:02:00 -- accel/accel.sh@60 -- # waitforlisten 2171976 00:06:22.904 21:02:00 -- common/autotest_common.sh@819 -- # '[' -z 2171976 ']' 00:06:22.904 21:02:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.904 21:02:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.904 21:02:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
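The accel suite below starts spdk_tgt with a generated accel config passed on fd 63 and then asks the target which module handles each opcode; with nothing but the default engine configured here, every opcode is expected to come back assigned to software, which is what the long expected_opcs loop that follows records. The query, as the script issues it:

  scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'   # one "<opcode>=<module>" line per opcode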
00:06:22.904 21:02:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.904 21:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.904 21:02:00 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:22.904 21:02:00 -- accel/accel.sh@58 -- # build_accel_config 00:06:22.904 21:02:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.904 21:02:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.904 21:02:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.904 21:02:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.904 21:02:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.904 21:02:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.904 21:02:00 -- accel/accel.sh@42 -- # jq -r . 00:06:22.904 [2024-06-08 21:02:00.962611] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:22.904 [2024-06-08 21:02:00.962676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171976 ] 00:06:22.904 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.165 [2024-06-08 21:02:01.021313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.165 [2024-06-08 21:02:01.085663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.165 [2024-06-08 21:02:01.085792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.740 21:02:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.740 21:02:01 -- common/autotest_common.sh@852 -- # return 0 00:06:23.740 21:02:01 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:23.740 21:02:01 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:23.740 21:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.740 21:02:01 -- common/autotest_common.sh@10 -- # set +x 00:06:23.740 21:02:01 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:23.740 21:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.740 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.740 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.740 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # IFS== 00:06:23.741 21:02:01 -- accel/accel.sh@64 -- # read -r opc module 00:06:23.741 21:02:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:23.741 21:02:01 -- accel/accel.sh@67 -- # killprocess 2171976 00:06:23.741 21:02:01 -- common/autotest_common.sh@926 -- # '[' -z 2171976 ']' 00:06:23.741 21:02:01 -- common/autotest_common.sh@930 -- # kill -0 2171976 00:06:23.741 21:02:01 -- common/autotest_common.sh@931 -- # uname 00:06:23.741 21:02:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.741 21:02:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2171976 00:06:23.741 21:02:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.741 21:02:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.741 21:02:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2171976' 00:06:23.741 killing process with pid 2171976 00:06:23.741 21:02:01 -- common/autotest_common.sh@945 -- # kill 2171976 00:06:23.741 21:02:01 -- common/autotest_common.sh@950 -- # wait 2171976 00:06:24.005 21:02:01 -- accel/accel.sh@68 -- # trap - ERR 00:06:24.005 21:02:01 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:24.005 21:02:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:24.005 21:02:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.005 21:02:01 -- common/autotest_common.sh@10 -- # set +x 00:06:24.005 21:02:02 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:24.005 21:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:24.005 21:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.005 21:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.005 21:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.005 21:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.005 21:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.005 21:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.005 21:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.005 21:02:02 -- accel/accel.sh@42 -- # jq -r . 
00:06:24.005 21:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.005 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.005 21:02:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:24.005 21:02:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:24.005 21:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.005 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.005 ************************************ 00:06:24.005 START TEST accel_missing_filename 00:06:24.005 ************************************ 00:06:24.005 21:02:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:24.005 21:02:02 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.005 21:02:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:24.005 21:02:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:24.005 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.005 21:02:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:24.005 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.005 21:02:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:24.005 21:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:24.005 21:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.005 21:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.005 21:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.005 21:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.005 21:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.005 21:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.005 21:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.005 21:02:02 -- accel/accel.sh@42 -- # jq -r . 00:06:24.265 [2024-06-08 21:02:02.098974] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.265 [2024-06-08 21:02:02.099047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172342 ] 00:06:24.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.265 [2024-06-08 21:02:02.159899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.265 [2024-06-08 21:02:02.225618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.265 [2024-06-08 21:02:02.257506] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.265 [2024-06-08 21:02:02.294392] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:24.265 A filename is required. 
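accel_missing_filename drives accel_perf with -w compress and no -l input file, and the run aborts with "A filename is required.", which is exactly the failure the wrapper is waiting for. A minimal way to trigger the same failure outside the harness is sketched below; it assumes accel_perf is usable without the -c JSON config the harness feeds it on fd 62:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  if "$SPDK/build/examples/accel_perf" -t 1 -w compress; then
      echo "unexpected: compress ran without an input file"
  else
      echo "rejected as expected: compress needs -l <uncompressed input file>"
  fi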
00:06:24.265 21:02:02 -- common/autotest_common.sh@643 -- # es=234 00:06:24.265 21:02:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.265 21:02:02 -- common/autotest_common.sh@652 -- # es=106 00:06:24.265 21:02:02 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:24.265 21:02:02 -- common/autotest_common.sh@660 -- # es=1 00:06:24.265 21:02:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.265 00:06:24.265 real 0m0.273s 00:06:24.265 user 0m0.214s 00:06:24.265 sys 0m0.097s 00:06:24.265 21:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.265 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.265 ************************************ 00:06:24.265 END TEST accel_missing_filename 00:06:24.265 ************************************ 00:06:24.526 21:02:02 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.526 21:02:02 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:24.526 21:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.526 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 ************************************ 00:06:24.526 START TEST accel_compress_verify 00:06:24.526 ************************************ 00:06:24.526 21:02:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.526 21:02:02 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.526 21:02:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.526 21:02:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:24.526 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.526 21:02:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:24.526 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.526 21:02:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.526 21:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.526 21:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.526 21:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.526 21:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.526 21:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.526 21:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.526 21:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.526 21:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.526 21:02:02 -- accel/accel.sh@42 -- # jq -r . 00:06:24.526 [2024-06-08 21:02:02.410577] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:24.526 [2024-06-08 21:02:02.410673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172371 ] 00:06:24.526 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.526 [2024-06-08 21:02:02.471859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.526 [2024-06-08 21:02:02.534315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.526 [2024-06-08 21:02:02.566008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.526 [2024-06-08 21:02:02.602892] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:24.788 00:06:24.788 Compression does not support the verify option, aborting. 00:06:24.788 21:02:02 -- common/autotest_common.sh@643 -- # es=161 00:06:24.788 21:02:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.788 21:02:02 -- common/autotest_common.sh@652 -- # es=33 00:06:24.788 21:02:02 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:24.788 21:02:02 -- common/autotest_common.sh@660 -- # es=1 00:06:24.788 21:02:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.788 00:06:24.788 real 0m0.272s 00:06:24.788 user 0m0.213s 00:06:24.788 sys 0m0.101s 00:06:24.788 21:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.788 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 ************************************ 00:06:24.788 END TEST accel_compress_verify 00:06:24.788 ************************************ 00:06:24.788 21:02:02 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:24.788 21:02:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:24.788 21:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.788 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 ************************************ 00:06:24.788 START TEST accel_wrong_workload 00:06:24.788 ************************************ 00:06:24.788 21:02:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:24.788 21:02:02 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.788 21:02:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:24.788 21:02:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:24.788 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.788 21:02:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:24.788 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.788 21:02:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:24.788 21:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:24.788 21:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.788 21:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.788 21:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.788 21:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.788 21:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.788 21:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.788 21:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.788 21:02:02 -- accel/accel.sh@42 -- # jq -r . 
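Both negative tests above end with the same "es=" bookkeeping from autotest_common.sh: 234 becomes 106 and 161 becomes 33 (each minus 128), both then collapse to 1, and the final check is the literal (( !es == 0 )) visible in the trace. The helper itself never appears in this log, so the function below is only one reading of that pattern, a hypothetical sketch rather than the real implementation:

  # Hypothetical NOT-style wrapper implied by the xtrace above.
  not_sketch() {
      "$@"
      local es=$?
      (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, 161 -> 33, as in the trace
      (( es != 0 )) && es=1                  # collapse any remaining failure to 1
      (( !es == 0 ))                         # pass only when the wrapped command failed
  }
  # e.g. not_sketch "$SPDK/build/examples/accel_perf" -t 1 -w compress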
00:06:24.788 Unsupported workload type: foobar 00:06:24.788 [2024-06-08 21:02:02.716720] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:24.788 accel_perf options: 00:06:24.788 [-h help message] 00:06:24.788 [-q queue depth per core] 00:06:24.788 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:24.788 [-T number of threads per core 00:06:24.788 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:24.788 [-t time in seconds] 00:06:24.788 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:24.788 [ dif_verify, , dif_generate, dif_generate_copy 00:06:24.788 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:24.788 [-l for compress/decompress workloads, name of uncompressed input file 00:06:24.788 [-S for crc32c workload, use this seed value (default 0) 00:06:24.788 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:24.788 [-f for fill workload, use this BYTE value (default 255) 00:06:24.788 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:24.788 [-y verify result if this switch is on] 00:06:24.788 [-a tasks to allocate per core (default: same value as -q)] 00:06:24.788 Can be used to spread operations across a wider range of memory. 00:06:24.788 21:02:02 -- common/autotest_common.sh@643 -- # es=1 00:06:24.788 21:02:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.788 21:02:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:24.788 21:02:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.788 00:06:24.788 real 0m0.031s 00:06:24.788 user 0m0.018s 00:06:24.788 sys 0m0.013s 00:06:24.788 21:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.788 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 ************************************ 00:06:24.788 END TEST accel_wrong_workload 00:06:24.788 ************************************ 00:06:24.788 Error: writing output failed: Broken pipe 00:06:24.788 21:02:02 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:24.788 21:02:02 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:24.788 21:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.788 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 ************************************ 00:06:24.788 START TEST accel_negative_buffers 00:06:24.788 ************************************ 00:06:24.788 21:02:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:24.788 21:02:02 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.788 21:02:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:24.788 21:02:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:24.788 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.788 21:02:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:24.788 21:02:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.788 21:02:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:24.788 21:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:24.788 21:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.788 21:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.788 21:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.788 21:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.788 21:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.788 21:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.788 21:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.788 21:02:02 -- accel/accel.sh@42 -- # jq -r . 00:06:24.788 -x option must be non-negative. 00:06:24.788 [2024-06-08 21:02:02.791224] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:24.788 accel_perf options: 00:06:24.788 [-h help message] 00:06:24.788 [-q queue depth per core] 00:06:24.788 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:24.788 [-T number of threads per core 00:06:24.788 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:24.788 [-t time in seconds] 00:06:24.788 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:24.788 [ dif_verify, , dif_generate, dif_generate_copy 00:06:24.788 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:24.788 [-l for compress/decompress workloads, name of uncompressed input file 00:06:24.788 [-S for crc32c workload, use this seed value (default 0) 00:06:24.788 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:24.788 [-f for fill workload, use this BYTE value (default 255) 00:06:24.788 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:24.788 [-y verify result if this switch is on] 00:06:24.788 [-a tasks to allocate per core (default: same value as -q)] 00:06:24.788 Can be used to spread operations across a wider range of memory. 
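The two option dumps above double as a reference for the flags accel_perf accepts. A positive invocation built only from flags listed there, and matching the configuration the next test reports (software path, 4096-byte transfers, queue and allocate depth 32, CRC-32C seed 32, verification on), would look like the line below; the harness additionally passes a -c JSON config over fd 62, which is left out here on the assumption that the defaults are acceptable:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -q/-a queue and allocate depth, -o transfer size, -S crc32c seed, -y verify
  "$SPDK/build/examples/accel_perf" -t 1 -q 32 -a 32 -o 4096 -w crc32c -S 32 -y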
00:06:24.788 21:02:02 -- common/autotest_common.sh@643 -- # es=1 00:06:24.788 21:02:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.788 21:02:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:24.788 21:02:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.789 00:06:24.789 real 0m0.035s 00:06:24.789 user 0m0.025s 00:06:24.789 sys 0m0.010s 00:06:24.789 21:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.789 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.789 ************************************ 00:06:24.789 END TEST accel_negative_buffers 00:06:24.789 ************************************ 00:06:24.789 Error: writing output failed: Broken pipe 00:06:24.789 21:02:02 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:24.789 21:02:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:24.789 21:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.789 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.789 ************************************ 00:06:24.789 START TEST accel_crc32c 00:06:24.789 ************************************ 00:06:24.789 21:02:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:24.789 21:02:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.789 21:02:02 -- accel/accel.sh@17 -- # local accel_module 00:06:24.789 21:02:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:24.789 21:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:24.789 21:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.789 21:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.789 21:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.789 21:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.789 21:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.789 21:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.789 21:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.789 21:02:02 -- accel/accel.sh@42 -- # jq -r . 00:06:24.789 [2024-06-08 21:02:02.868163] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.789 [2024-06-08 21:02:02.868230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172429 ] 00:06:25.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.049 [2024-06-08 21:02:02.928800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.049 [2024-06-08 21:02:02.990230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.433 21:02:04 -- accel/accel.sh@18 -- # out=' 00:06:26.433 SPDK Configuration: 00:06:26.433 Core mask: 0x1 00:06:26.433 00:06:26.433 Accel Perf Configuration: 00:06:26.433 Workload Type: crc32c 00:06:26.433 CRC-32C seed: 32 00:06:26.433 Transfer size: 4096 bytes 00:06:26.433 Vector count 1 00:06:26.433 Module: software 00:06:26.433 Queue depth: 32 00:06:26.433 Allocate depth: 32 00:06:26.433 # threads/core: 1 00:06:26.433 Run time: 1 seconds 00:06:26.433 Verify: Yes 00:06:26.433 00:06:26.433 Running for 1 seconds... 
00:06:26.433 00:06:26.433 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.433 ------------------------------------------------------------------------------------ 00:06:26.433 0,0 448928/s 1753 MiB/s 0 0 00:06:26.433 ==================================================================================== 00:06:26.433 Total 448928/s 1753 MiB/s 0 0' 00:06:26.433 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.433 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.433 21:02:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:26.433 21:02:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:26.433 21:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.433 21:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.433 21:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.433 21:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.433 21:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.433 21:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.433 21:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.433 21:02:04 -- accel/accel.sh@42 -- # jq -r . 00:06:26.433 [2024-06-08 21:02:04.141741] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:26.433 [2024-06-08 21:02:04.141846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172764 ] 00:06:26.433 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.433 [2024-06-08 21:02:04.201958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.433 [2024-06-08 21:02:04.264369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.433 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=0x1 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=crc32c 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=32 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 
21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=software 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=32 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=32 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=1 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val=Yes 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:26.434 21:02:04 -- accel/accel.sh@21 -- # val= 00:06:26.434 21:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # IFS=: 00:06:26.434 21:02:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@21 -- # val= 00:06:27.377 21:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@21 -- # val= 00:06:27.377 21:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@21 -- # val= 00:06:27.377 21:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@21 -- # val= 00:06:27.377 21:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@21 -- # val= 00:06:27.377 21:02:05 -- accel/accel.sh@22 -- # case "$var" in 
00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@21 -- # val= 00:06:27.377 21:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # IFS=: 00:06:27.377 21:02:05 -- accel/accel.sh@20 -- # read -r var val 00:06:27.377 21:02:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.377 21:02:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:27.377 21:02:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.377 00:06:27.377 real 0m2.549s 00:06:27.377 user 0m2.364s 00:06:27.377 sys 0m0.179s 00:06:27.377 21:02:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.377 21:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:27.377 ************************************ 00:06:27.377 END TEST accel_crc32c 00:06:27.377 ************************************ 00:06:27.377 21:02:05 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:27.377 21:02:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:27.377 21:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.377 21:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:27.377 ************************************ 00:06:27.377 START TEST accel_crc32c_C2 00:06:27.377 ************************************ 00:06:27.377 21:02:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:27.377 21:02:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.377 21:02:05 -- accel/accel.sh@17 -- # local accel_module 00:06:27.377 21:02:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:27.377 21:02:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:27.377 21:02:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.377 21:02:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.377 21:02:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.377 21:02:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.377 21:02:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.377 21:02:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.377 21:02:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.377 21:02:05 -- accel/accel.sh@42 -- # jq -r . 00:06:27.377 [2024-06-08 21:02:05.452676] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
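In the accel_crc32c table above, the bandwidth column follows from the transfer rate and the 4096-byte transfer size reported in the configuration dump; the MiB/s figure can be reproduced from the log's own numbers (integer truncation assumed, since it matches the printed value):

  awk 'BEGIN { printf "crc32c (software): %d MiB/s\n", int(448928 * 4096 / 1048576) }'   # log: 1753 MiB/s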
00:06:27.377 [2024-06-08 21:02:05.452750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173119 ] 00:06:27.640 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.640 [2024-06-08 21:02:05.513179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.640 [2024-06-08 21:02:05.576588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.025 21:02:06 -- accel/accel.sh@18 -- # out=' 00:06:29.025 SPDK Configuration: 00:06:29.025 Core mask: 0x1 00:06:29.025 00:06:29.025 Accel Perf Configuration: 00:06:29.025 Workload Type: crc32c 00:06:29.025 CRC-32C seed: 0 00:06:29.025 Transfer size: 4096 bytes 00:06:29.025 Vector count 2 00:06:29.025 Module: software 00:06:29.025 Queue depth: 32 00:06:29.025 Allocate depth: 32 00:06:29.025 # threads/core: 1 00:06:29.025 Run time: 1 seconds 00:06:29.025 Verify: Yes 00:06:29.025 00:06:29.025 Running for 1 seconds... 00:06:29.025 00:06:29.025 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.025 ------------------------------------------------------------------------------------ 00:06:29.025 0,0 376992/s 2945 MiB/s 0 0 00:06:29.025 ==================================================================================== 00:06:29.025 Total 376992/s 1472 MiB/s 0 0' 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:29.025 21:02:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:29.025 21:02:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.025 21:02:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.025 21:02:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.025 21:02:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.025 21:02:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.025 21:02:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.025 21:02:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.025 21:02:06 -- accel/accel.sh@42 -- # jq -r . 00:06:29.025 [2024-06-08 21:02:06.728098] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
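The two-vector run above reports the same 376992 transfers/s on both rows but 2945 MiB/s for core 0 and 1472 MiB/s in the Total row. The per-core figure matches 2 x 4096 bytes per transfer, the Total row matches a single 4096-byte buffer; whether the Total row deliberately drops the vector multiplier in this accel_perf build is not something the log settles, but the arithmetic behind both numbers is easy to confirm:

  awk 'BEGIN {
      printf "core 0 (2 x 4096 B per op): %d MiB/s\n", int(376992 * 4096 * 2 / 1048576)  # log: 2945 MiB/s
      printf "Total  (1 x 4096 B per op): %d MiB/s\n", int(376992 * 4096 / 1048576)      # log: 1472 MiB/s
  }'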
00:06:29.025 [2024-06-08 21:02:06.728199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173239 ] 00:06:29.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.025 [2024-06-08 21:02:06.788110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.025 [2024-06-08 21:02:06.850455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=0x1 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=crc32c 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=0 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=software 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=32 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=32 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- 
accel/accel.sh@21 -- # val=1 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val=Yes 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.025 21:02:06 -- accel/accel.sh@21 -- # val= 00:06:29.025 21:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.025 21:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@21 -- # val= 00:06:29.967 21:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@21 -- # val= 00:06:29.967 21:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@21 -- # val= 00:06:29.967 21:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@21 -- # val= 00:06:29.967 21:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@21 -- # val= 00:06:29.967 21:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@21 -- # val= 00:06:29.967 21:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:29.967 21:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:29.967 21:02:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.967 21:02:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:29.967 21:02:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.967 00:06:29.967 real 0m2.549s 00:06:29.967 user 0m2.350s 00:06:29.967 sys 0m0.194s 00:06:29.967 21:02:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.967 21:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:29.967 ************************************ 00:06:29.967 END TEST accel_crc32c_C2 00:06:29.967 ************************************ 00:06:29.967 21:02:08 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:29.967 21:02:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:29.967 21:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.967 21:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.967 ************************************ 00:06:29.967 START TEST accel_copy 
00:06:29.967 ************************************ 00:06:29.967 21:02:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:29.967 21:02:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.967 21:02:08 -- accel/accel.sh@17 -- # local accel_module 00:06:29.967 21:02:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:29.967 21:02:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:29.967 21:02:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.967 21:02:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.967 21:02:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.967 21:02:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.967 21:02:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.967 21:02:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.967 21:02:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.967 21:02:08 -- accel/accel.sh@42 -- # jq -r . 00:06:29.967 [2024-06-08 21:02:08.038131] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:29.967 [2024-06-08 21:02:08.038232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173487 ] 00:06:30.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.228 [2024-06-08 21:02:08.106847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.228 [2024-06-08 21:02:08.170544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.643 21:02:09 -- accel/accel.sh@18 -- # out=' 00:06:31.643 SPDK Configuration: 00:06:31.643 Core mask: 0x1 00:06:31.643 00:06:31.643 Accel Perf Configuration: 00:06:31.643 Workload Type: copy 00:06:31.643 Transfer size: 4096 bytes 00:06:31.643 Vector count 1 00:06:31.643 Module: software 00:06:31.643 Queue depth: 32 00:06:31.643 Allocate depth: 32 00:06:31.643 # threads/core: 1 00:06:31.643 Run time: 1 seconds 00:06:31.643 Verify: Yes 00:06:31.643 00:06:31.643 Running for 1 seconds... 00:06:31.643 00:06:31.643 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.643 ------------------------------------------------------------------------------------ 00:06:31.643 0,0 304000/s 1187 MiB/s 0 0 00:06:31.643 ==================================================================================== 00:06:31.643 Total 304000/s 1187 MiB/s 0 0' 00:06:31.643 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.643 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.643 21:02:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:31.643 21:02:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:31.643 21:02:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.644 21:02:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.644 21:02:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.644 21:02:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.644 21:02:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.644 21:02:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.644 21:02:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.644 21:02:09 -- accel/accel.sh@42 -- # jq -r . 00:06:31.644 [2024-06-08 21:02:09.320732] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
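The accel_copy table above pairs a queue depth of 32 with 304000 transfers/s (1187 MiB/s of 4-KiB buffers). Assuming the queue stays full for the whole second, which the Queue depth / Allocate depth of 32 suggests but the log does not prove, Little's law gives the implied per-operation latency:

  # concurrency = rate x latency  =>  latency = queue_depth / rate
  awk 'BEGIN { printf "implied copy latency: %.1f us per 4 KiB op\n", 32 / 304000 * 1e6 }'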
00:06:31.644 [2024-06-08 21:02:09.320818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173830 ] 00:06:31.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.644 [2024-06-08 21:02:09.382343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.644 [2024-06-08 21:02:09.445112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=0x1 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=copy 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=software 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=32 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=32 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=1 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val=Yes 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.644 21:02:09 -- accel/accel.sh@21 -- # val= 00:06:31.644 21:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.644 21:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@21 -- # val= 00:06:32.587 21:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@21 -- # val= 00:06:32.587 21:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@21 -- # val= 00:06:32.587 21:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@21 -- # val= 00:06:32.587 21:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@21 -- # val= 00:06:32.587 21:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@21 -- # val= 00:06:32.587 21:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:32.587 21:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:32.587 21:02:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.587 21:02:10 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:32.587 21:02:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.587 00:06:32.587 real 0m2.559s 00:06:32.587 user 0m2.358s 00:06:32.587 sys 0m0.195s 00:06:32.587 21:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.587 21:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:32.587 ************************************ 00:06:32.587 END TEST accel_copy 00:06:32.587 ************************************ 00:06:32.587 21:02:10 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.587 21:02:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:32.587 21:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.587 21:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:32.587 ************************************ 00:06:32.587 START TEST accel_fill 00:06:32.587 ************************************ 00:06:32.587 21:02:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.587 21:02:10 -- accel/accel.sh@16 -- # local accel_opc 
00:06:32.587 21:02:10 -- accel/accel.sh@17 -- # local accel_module 00:06:32.587 21:02:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.587 21:02:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.587 21:02:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.587 21:02:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.587 21:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.587 21:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.587 21:02:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.587 21:02:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.587 21:02:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.587 21:02:10 -- accel/accel.sh@42 -- # jq -r . 00:06:32.587 [2024-06-08 21:02:10.633214] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:32.587 [2024-06-08 21:02:10.633292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174182 ] 00:06:32.587 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.848 [2024-06-08 21:02:10.693507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.848 [2024-06-08 21:02:10.756613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.791 21:02:11 -- accel/accel.sh@18 -- # out=' 00:06:33.791 SPDK Configuration: 00:06:33.791 Core mask: 0x1 00:06:33.791 00:06:33.791 Accel Perf Configuration: 00:06:33.791 Workload Type: fill 00:06:33.791 Fill pattern: 0x80 00:06:33.791 Transfer size: 4096 bytes 00:06:33.791 Vector count 1 00:06:33.791 Module: software 00:06:33.791 Queue depth: 64 00:06:33.791 Allocate depth: 64 00:06:33.791 # threads/core: 1 00:06:33.791 Run time: 1 seconds 00:06:33.791 Verify: Yes 00:06:33.791 00:06:33.791 Running for 1 seconds... 00:06:33.791 00:06:33.791 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.791 ------------------------------------------------------------------------------------ 00:06:33.791 0,0 467136/s 1824 MiB/s 0 0 00:06:33.791 ==================================================================================== 00:06:33.791 Total 467136/s 1824 MiB/s 0 0' 00:06:33.791 21:02:11 -- accel/accel.sh@20 -- # IFS=: 00:06:33.791 21:02:11 -- accel/accel.sh@20 -- # read -r var val 00:06:33.791 21:02:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.052 21:02:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.052 21:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.052 21:02:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.052 21:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.052 21:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.052 21:02:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.052 21:02:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.052 21:02:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.052 21:02:11 -- accel/accel.sh@42 -- # jq -r . 00:06:34.052 [2024-06-08 21:02:11.907583] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
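The fill test above is the only workload in this stretch run at queue and allocate depth 64; its command line passes -f 128 -q 64 -a 64, and 128 shows up as the 0x80 fill pattern in the configuration dump. Its bandwidth figure follows from the same transfers/s times 4096 bytes relation as the earlier tables:

  awk 'BEGIN { printf "fill (software): %d MiB/s\n", int(467136 * 4096 / 1048576) }'   # log: 1824 MiB/s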
00:06:34.052 [2024-06-08 21:02:11.907655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174340 ] 00:06:34.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.052 [2024-06-08 21:02:11.967644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.052 [2024-06-08 21:02:12.030315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=0x1 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=fill 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=0x80 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=software 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=64 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=64 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- 
accel/accel.sh@21 -- # val=1 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val=Yes 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.052 21:02:12 -- accel/accel.sh@21 -- # val= 00:06:34.052 21:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.052 21:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.437 21:02:13 -- accel/accel.sh@21 -- # val= 00:06:35.437 21:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.437 21:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.437 21:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.437 21:02:13 -- accel/accel.sh@21 -- # val= 00:06:35.437 21:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.437 21:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.437 21:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.437 21:02:13 -- accel/accel.sh@21 -- # val= 00:06:35.438 21:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.438 21:02:13 -- accel/accel.sh@21 -- # val= 00:06:35.438 21:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.438 21:02:13 -- accel/accel.sh@21 -- # val= 00:06:35.438 21:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.438 21:02:13 -- accel/accel.sh@21 -- # val= 00:06:35.438 21:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:35.438 21:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:35.438 21:02:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.438 21:02:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:35.438 21:02:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.438 00:06:35.438 real 0m2.547s 00:06:35.438 user 0m2.354s 00:06:35.438 sys 0m0.189s 00:06:35.438 21:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.438 21:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:35.438 ************************************ 00:06:35.438 END TEST accel_fill 00:06:35.438 ************************************ 00:06:35.438 21:02:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:35.438 21:02:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:35.438 21:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.438 21:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:35.438 ************************************ 00:06:35.438 START TEST 
accel_copy_crc32c 00:06:35.438 ************************************ 00:06:35.438 21:02:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:35.438 21:02:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.438 21:02:13 -- accel/accel.sh@17 -- # local accel_module 00:06:35.438 21:02:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:35.438 21:02:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:35.438 21:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.438 21:02:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.438 21:02:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.438 21:02:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.438 21:02:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.438 21:02:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.438 21:02:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.438 21:02:13 -- accel/accel.sh@42 -- # jq -r . 00:06:35.438 [2024-06-08 21:02:13.217040] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:35.438 [2024-06-08 21:02:13.217112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174551 ] 00:06:35.438 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.438 [2024-06-08 21:02:13.277491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.438 [2024-06-08 21:02:13.340518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.379 21:02:14 -- accel/accel.sh@18 -- # out=' 00:06:36.379 SPDK Configuration: 00:06:36.379 Core mask: 0x1 00:06:36.379 00:06:36.379 Accel Perf Configuration: 00:06:36.379 Workload Type: copy_crc32c 00:06:36.379 CRC-32C seed: 0 00:06:36.379 Vector size: 4096 bytes 00:06:36.379 Transfer size: 4096 bytes 00:06:36.379 Vector count 1 00:06:36.379 Module: software 00:06:36.379 Queue depth: 32 00:06:36.379 Allocate depth: 32 00:06:36.379 # threads/core: 1 00:06:36.379 Run time: 1 seconds 00:06:36.379 Verify: Yes 00:06:36.379 00:06:36.379 Running for 1 seconds... 00:06:36.379 00:06:36.379 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.379 ------------------------------------------------------------------------------------ 00:06:36.379 0,0 248832/s 972 MiB/s 0 0 00:06:36.379 ==================================================================================== 00:06:36.379 Total 248832/s 972 MiB/s 0 0' 00:06:36.379 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.379 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.379 21:02:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.379 21:02:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:36.379 21:02:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.379 21:02:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.379 21:02:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.379 21:02:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.379 21:02:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.379 21:02:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.379 21:02:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.379 21:02:14 -- accel/accel.sh@42 -- # jq -r . 
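The [[ 0 -gt 0 ]] guards, the accel_json_cfg=() array and the trailing jq -r . that precede each accel_perf invocation are the build_accel_config step: the harness assembles an optional JSON accel configuration (apparently empty here, since every guard evaluates false) and feeds it to accel_perf as the -c /dev/fd/62 argument seen on the command line. A rough sketch of the same idea using process substitution; the JSON body is a placeholder, not the harness's real output:

  cfg='{"subsystems": []}'                         # stand-in for what build_accel_config would emit
  ./build/examples/accel_perf -c <(printf '%s\n' "$cfg" | jq -r .) \
      -t 1 -w copy_crc32c -y                       # same flags as the run_test line above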
00:06:36.640 [2024-06-08 21:02:14.491497] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:36.640 [2024-06-08 21:02:14.491571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174891 ] 00:06:36.640 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.640 [2024-06-08 21:02:14.551351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.640 [2024-06-08 21:02:14.613004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=0x1 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=0 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=software 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=32 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 
00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=32 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=1 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val=Yes 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:36.640 21:02:14 -- accel/accel.sh@21 -- # val= 00:06:36.640 21:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:36.640 21:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@21 -- # val= 00:06:38.025 21:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@21 -- # val= 00:06:38.025 21:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@21 -- # val= 00:06:38.025 21:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@21 -- # val= 00:06:38.025 21:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@21 -- # val= 00:06:38.025 21:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@21 -- # val= 00:06:38.025 21:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.025 21:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.025 21:02:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.025 21:02:15 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:38.025 21:02:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.025 00:06:38.025 real 0m2.546s 00:06:38.025 user 0m2.351s 00:06:38.025 sys 0m0.190s 00:06:38.025 21:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.025 21:02:15 -- common/autotest_common.sh@10 -- # set +x 00:06:38.025 ************************************ 00:06:38.025 END TEST accel_copy_crc32c 00:06:38.025 ************************************ 00:06:38.026 
21:02:15 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:38.026 21:02:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:38.026 21:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.026 21:02:15 -- common/autotest_common.sh@10 -- # set +x 00:06:38.026 ************************************ 00:06:38.026 START TEST accel_copy_crc32c_C2 00:06:38.026 ************************************ 00:06:38.026 21:02:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:38.026 21:02:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.026 21:02:15 -- accel/accel.sh@17 -- # local accel_module 00:06:38.026 21:02:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:38.026 21:02:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:38.026 21:02:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.026 21:02:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.026 21:02:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.026 21:02:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.026 21:02:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.026 21:02:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.026 21:02:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.026 21:02:15 -- accel/accel.sh@42 -- # jq -r . 00:06:38.026 [2024-06-08 21:02:15.801271] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:38.026 [2024-06-08 21:02:15.801393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175242 ] 00:06:38.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.026 [2024-06-08 21:02:15.871470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.026 [2024-06-08 21:02:15.936206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.411 21:02:17 -- accel/accel.sh@18 -- # out=' 00:06:39.411 SPDK Configuration: 00:06:39.411 Core mask: 0x1 00:06:39.411 00:06:39.411 Accel Perf Configuration: 00:06:39.411 Workload Type: copy_crc32c 00:06:39.411 CRC-32C seed: 0 00:06:39.411 Vector size: 4096 bytes 00:06:39.411 Transfer size: 8192 bytes 00:06:39.411 Vector count 2 00:06:39.411 Module: software 00:06:39.411 Queue depth: 32 00:06:39.411 Allocate depth: 32 00:06:39.411 # threads/core: 1 00:06:39.411 Run time: 1 seconds 00:06:39.411 Verify: Yes 00:06:39.411 00:06:39.411 Running for 1 seconds... 
00:06:39.411 00:06:39.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.411 ------------------------------------------------------------------------------------ 00:06:39.411 0,0 187808/s 1467 MiB/s 0 0 00:06:39.411 ==================================================================================== 00:06:39.411 Total 187808/s 733 MiB/s 0 0' 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:39.411 21:02:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:39.411 21:02:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.411 21:02:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.411 21:02:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.411 21:02:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.411 21:02:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.411 21:02:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.411 21:02:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.411 21:02:17 -- accel/accel.sh@42 -- # jq -r . 00:06:39.411 [2024-06-08 21:02:17.086097] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:39.411 [2024-06-08 21:02:17.086170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175472 ] 00:06:39.411 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.411 [2024-06-08 21:02:17.146084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.411 [2024-06-08 21:02:17.207766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=0x1 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=0 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 
00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=software 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=32 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=32 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=1 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val=Yes 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:39.411 21:02:17 -- accel/accel.sh@21 -- # val= 00:06:39.411 21:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:39.411 21:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@21 -- # val= 00:06:40.353 21:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@21 -- # val= 00:06:40.353 21:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@21 -- # val= 00:06:40.353 21:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@21 -- # val= 00:06:40.353 21:02:18 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@21 -- # val= 00:06:40.353 21:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@21 -- # val= 00:06:40.353 21:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.353 21:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.353 21:02:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.353 21:02:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:40.353 21:02:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.353 00:06:40.353 real 0m2.559s 00:06:40.353 user 0m2.352s 00:06:40.353 sys 0m0.203s 00:06:40.353 21:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.353 21:02:18 -- common/autotest_common.sh@10 -- # set +x 00:06:40.353 ************************************ 00:06:40.353 END TEST accel_copy_crc32c_C2 00:06:40.353 ************************************ 00:06:40.353 21:02:18 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:40.353 21:02:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:40.353 21:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.353 21:02:18 -- common/autotest_common.sh@10 -- # set +x 00:06:40.353 ************************************ 00:06:40.353 START TEST accel_dualcast 00:06:40.353 ************************************ 00:06:40.353 21:02:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:40.353 21:02:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.353 21:02:18 -- accel/accel.sh@17 -- # local accel_module 00:06:40.353 21:02:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:40.353 21:02:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:40.353 21:02:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.353 21:02:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.353 21:02:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.353 21:02:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.353 21:02:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.353 21:02:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.353 21:02:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.353 21:02:18 -- accel/accel.sh@42 -- # jq -r . 00:06:40.353 [2024-06-08 21:02:18.396039] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
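The long runs of IFS=: / read -r var val / case "$var" entries are accel.sh parsing the configuration text that accel_perf echoes: each line is split on ':' and the workload type and module name are captured (the accel_opc=... and accel_module=... assignments in the trace), which the closing [[ -n software ]], [[ -n copy_crc32c ]] and [[ software == \s\o\f\t\w\a\r\e ]] checks of each sub-test then assert. The shape of that loop, with the case patterns assumed rather than copied from accel.sh:

  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;    # e.g. copy_crc32c
          *Module*)          accel_module=${val//[[:space:]]/} ;; # e.g. software
      esac
  done <<< "$out"                                                 # $out holds the captured accel_perf output
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]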
00:06:40.353 [2024-06-08 21:02:18.396132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175636 ] 00:06:40.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.614 [2024-06-08 21:02:18.456173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.614 [2024-06-08 21:02:18.518859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.556 21:02:19 -- accel/accel.sh@18 -- # out=' 00:06:41.556 SPDK Configuration: 00:06:41.556 Core mask: 0x1 00:06:41.556 00:06:41.556 Accel Perf Configuration: 00:06:41.556 Workload Type: dualcast 00:06:41.556 Transfer size: 4096 bytes 00:06:41.556 Vector count 1 00:06:41.556 Module: software 00:06:41.556 Queue depth: 32 00:06:41.556 Allocate depth: 32 00:06:41.556 # threads/core: 1 00:06:41.556 Run time: 1 seconds 00:06:41.556 Verify: Yes 00:06:41.556 00:06:41.556 Running for 1 seconds... 00:06:41.556 00:06:41.556 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.556 ------------------------------------------------------------------------------------ 00:06:41.556 0,0 361504/s 1412 MiB/s 0 0 00:06:41.556 ==================================================================================== 00:06:41.556 Total 361504/s 1412 MiB/s 0 0' 00:06:41.556 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.556 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.556 21:02:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:41.556 21:02:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:41.556 21:02:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.556 21:02:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.556 21:02:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.556 21:02:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.556 21:02:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.556 21:02:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.556 21:02:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.556 21:02:19 -- accel/accel.sh@42 -- # jq -r . 00:06:41.817 [2024-06-08 21:02:19.668543] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:41.817 [2024-06-08 21:02:19.668618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175949 ] 00:06:41.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.817 [2024-06-08 21:02:19.728166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.817 [2024-06-08 21:02:19.789715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=0x1 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=dualcast 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=software 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=32 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=32 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=1 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val=Yes 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:41.817 21:02:19 -- accel/accel.sh@21 -- # val= 00:06:41.817 21:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:41.817 21:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:43.203 21:02:20 -- accel/accel.sh@21 -- # val= 00:06:43.203 21:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.203 21:02:20 -- accel/accel.sh@21 -- # val= 00:06:43.203 21:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.203 21:02:20 -- accel/accel.sh@21 -- # val= 00:06:43.203 21:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.203 21:02:20 -- accel/accel.sh@21 -- # val= 00:06:43.203 21:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.203 21:02:20 -- accel/accel.sh@21 -- # val= 00:06:43.203 21:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.203 21:02:20 -- accel/accel.sh@21 -- # val= 00:06:43.203 21:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.203 21:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.204 21:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.204 21:02:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.204 21:02:20 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:43.204 21:02:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.204 00:06:43.204 real 0m2.544s 00:06:43.204 user 0m2.340s 00:06:43.204 sys 0m0.199s 00:06:43.204 21:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.204 21:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:43.204 ************************************ 00:06:43.204 END TEST accel_dualcast 00:06:43.204 ************************************ 00:06:43.204 21:02:20 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:43.204 21:02:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:43.204 21:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.204 21:02:20 -- common/autotest_common.sh@10 -- # set +x 00:06:43.204 ************************************ 00:06:43.204 START TEST accel_compare 00:06:43.204 ************************************ 00:06:43.204 21:02:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:43.204 21:02:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.204 21:02:20 
-- accel/accel.sh@17 -- # local accel_module 00:06:43.204 21:02:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:43.204 21:02:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:43.204 21:02:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.204 21:02:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.204 21:02:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.204 21:02:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.204 21:02:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.204 21:02:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.204 21:02:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.204 21:02:20 -- accel/accel.sh@42 -- # jq -r . 00:06:43.204 [2024-06-08 21:02:20.977905] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:43.204 [2024-06-08 21:02:20.977989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176300 ] 00:06:43.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.204 [2024-06-08 21:02:21.047319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.204 [2024-06-08 21:02:21.110441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.145 21:02:22 -- accel/accel.sh@18 -- # out=' 00:06:44.145 SPDK Configuration: 00:06:44.145 Core mask: 0x1 00:06:44.145 00:06:44.145 Accel Perf Configuration: 00:06:44.145 Workload Type: compare 00:06:44.145 Transfer size: 4096 bytes 00:06:44.145 Vector count 1 00:06:44.145 Module: software 00:06:44.145 Queue depth: 32 00:06:44.145 Allocate depth: 32 00:06:44.145 # threads/core: 1 00:06:44.145 Run time: 1 seconds 00:06:44.145 Verify: Yes 00:06:44.145 00:06:44.145 Running for 1 seconds... 00:06:44.145 00:06:44.145 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.145 ------------------------------------------------------------------------------------ 00:06:44.145 0,0 433120/s 1691 MiB/s 0 0 00:06:44.145 ==================================================================================== 00:06:44.145 Total 433120/s 1691 MiB/s 0 0' 00:06:44.145 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.145 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.145 21:02:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:44.145 21:02:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:44.407 21:02:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.407 21:02:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.407 21:02:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.407 21:02:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.407 21:02:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.407 21:02:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.407 21:02:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.407 21:02:22 -- accel/accel.sh@42 -- # jq -r . 00:06:44.407 [2024-06-08 21:02:22.260857] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
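The recurring "EAL: No free 2048 kB hugepages reported on node 1" line only says that DPDK found no free 2 MiB hugepages on NUMA node 1; the pool reserved for the job presumably lives on node 0, which is enough here since every run uses core mask 0x1 (a single core). The per-instance --file-prefix=spdk_pid<NNN> keeps the hugepage and shared-memory files of each short-lived accel_perf process separate. Per-node availability can be read straight from sysfs:

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages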
00:06:44.407 [2024-06-08 21:02:22.260937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176597 ] 00:06:44.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.407 [2024-06-08 21:02:22.320636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.407 [2024-06-08 21:02:22.381854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=0x1 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=compare 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=software 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=32 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=32 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=1 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val=Yes 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:44.407 21:02:22 -- accel/accel.sh@21 -- # val= 00:06:44.407 21:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:44.407 21:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@21 -- # val= 00:06:45.791 21:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # IFS=: 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@21 -- # val= 00:06:45.791 21:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # IFS=: 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@21 -- # val= 00:06:45.791 21:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # IFS=: 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@21 -- # val= 00:06:45.791 21:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # IFS=: 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@21 -- # val= 00:06:45.791 21:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # IFS=: 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@21 -- # val= 00:06:45.791 21:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # IFS=: 00:06:45.791 21:02:23 -- accel/accel.sh@20 -- # read -r var val 00:06:45.791 21:02:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.791 21:02:23 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:45.791 21:02:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.791 00:06:45.791 real 0m2.555s 00:06:45.791 user 0m2.341s 00:06:45.791 sys 0m0.208s 00:06:45.791 21:02:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.791 21:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.791 ************************************ 00:06:45.791 END TEST accel_compare 00:06:45.791 ************************************ 00:06:45.792 21:02:23 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:45.792 21:02:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:45.792 21:02:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.792 21:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.792 ************************************ 00:06:45.792 START TEST accel_xor 00:06:45.792 ************************************ 00:06:45.792 21:02:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:45.792 21:02:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.792 21:02:23 -- accel/accel.sh@17 
-- # local accel_module 00:06:45.792 21:02:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:45.792 21:02:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:45.792 21:02:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.792 21:02:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.792 21:02:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.792 21:02:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.792 21:02:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.792 21:02:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.792 21:02:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.792 21:02:23 -- accel/accel.sh@42 -- # jq -r . 00:06:45.792 [2024-06-08 21:02:23.569636] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:45.792 [2024-06-08 21:02:23.569708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176766 ] 00:06:45.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.792 [2024-06-08 21:02:23.629684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.792 [2024-06-08 21:02:23.693850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.732 21:02:24 -- accel/accel.sh@18 -- # out=' 00:06:46.733 SPDK Configuration: 00:06:46.733 Core mask: 0x1 00:06:46.733 00:06:46.733 Accel Perf Configuration: 00:06:46.733 Workload Type: xor 00:06:46.733 Source buffers: 2 00:06:46.733 Transfer size: 4096 bytes 00:06:46.733 Vector count 1 00:06:46.733 Module: software 00:06:46.733 Queue depth: 32 00:06:46.733 Allocate depth: 32 00:06:46.733 # threads/core: 1 00:06:46.733 Run time: 1 seconds 00:06:46.733 Verify: Yes 00:06:46.733 00:06:46.733 Running for 1 seconds... 00:06:46.733 00:06:46.733 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.733 ------------------------------------------------------------------------------------ 00:06:46.733 0,0 359904/s 1405 MiB/s 0 0 00:06:46.733 ==================================================================================== 00:06:46.733 Total 359904/s 1405 MiB/s 0 0' 00:06:46.733 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.733 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.733 21:02:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:46.733 21:02:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.733 21:02:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.733 21:02:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.733 21:02:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.733 21:02:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.733 21:02:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.733 21:02:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.733 21:02:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.733 21:02:24 -- accel/accel.sh@42 -- # jq -r . 00:06:46.993 [2024-06-08 21:02:24.844376] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:46.994 [2024-06-08 21:02:24.844455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177011 ] 00:06:46.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.994 [2024-06-08 21:02:24.904092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.994 [2024-06-08 21:02:24.966628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=0x1 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=xor 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=2 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=software 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=32 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=32 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- 
accel/accel.sh@21 -- # val=1 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val=Yes 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:24 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:46.994 21:02:25 -- accel/accel.sh@21 -- # val= 00:06:46.994 21:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.994 21:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:46.994 21:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@21 -- # val= 00:06:48.388 21:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@21 -- # val= 00:06:48.388 21:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@21 -- # val= 00:06:48.388 21:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@21 -- # val= 00:06:48.388 21:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@21 -- # val= 00:06:48.388 21:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@21 -- # val= 00:06:48.388 21:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:48.388 21:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:48.388 21:02:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.388 21:02:26 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:48.388 21:02:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.388 00:06:48.388 real 0m2.547s 00:06:48.388 user 0m2.353s 00:06:48.388 sys 0m0.190s 00:06:48.388 21:02:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.388 21:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:48.388 ************************************ 00:06:48.388 END TEST accel_xor 00:06:48.388 ************************************ 00:06:48.388 21:02:26 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:48.388 21:02:26 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:48.388 21:02:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.388 21:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:48.388 ************************************ 00:06:48.388 START TEST accel_xor 
00:06:48.388 ************************************ 00:06:48.388 21:02:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:48.388 21:02:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.388 21:02:26 -- accel/accel.sh@17 -- # local accel_module 00:06:48.388 21:02:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:48.388 21:02:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:48.388 21:02:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.388 21:02:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.388 21:02:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.388 21:02:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.388 21:02:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.388 21:02:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.388 21:02:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.388 21:02:26 -- accel/accel.sh@42 -- # jq -r . 00:06:48.388 [2024-06-08 21:02:26.158081] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:48.388 [2024-06-08 21:02:26.158157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177360 ] 00:06:48.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.388 [2024-06-08 21:02:26.219321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.388 [2024-06-08 21:02:26.283939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.392 21:02:27 -- accel/accel.sh@18 -- # out=' 00:06:49.392 SPDK Configuration: 00:06:49.392 Core mask: 0x1 00:06:49.392 00:06:49.392 Accel Perf Configuration: 00:06:49.392 Workload Type: xor 00:06:49.392 Source buffers: 3 00:06:49.392 Transfer size: 4096 bytes 00:06:49.392 Vector count 1 00:06:49.392 Module: software 00:06:49.392 Queue depth: 32 00:06:49.392 Allocate depth: 32 00:06:49.392 # threads/core: 1 00:06:49.392 Run time: 1 seconds 00:06:49.392 Verify: Yes 00:06:49.392 00:06:49.392 Running for 1 seconds... 00:06:49.392 00:06:49.392 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.392 ------------------------------------------------------------------------------------ 00:06:49.393 0,0 342784/s 1339 MiB/s 0 0 00:06:49.393 ==================================================================================== 00:06:49.393 Total 342784/s 1339 MiB/s 0 0' 00:06:49.393 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.393 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.393 21:02:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.393 21:02:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.393 21:02:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.393 21:02:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.393 21:02:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.393 21:02:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.393 21:02:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.393 21:02:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.393 21:02:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.393 21:02:27 -- accel/accel.sh@42 -- # jq -r . 00:06:49.393 [2024-06-08 21:02:27.435773] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:49.393 [2024-06-08 21:02:27.435839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177698 ] 00:06:49.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.653 [2024-06-08 21:02:27.495071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.653 [2024-06-08 21:02:27.556911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=0x1 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=xor 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=3 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=software 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=32 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=32 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- 
accel/accel.sh@21 -- # val=1 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val=Yes 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.653 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:49.653 21:02:27 -- accel/accel.sh@21 -- # val= 00:06:49.653 21:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.654 21:02:27 -- accel/accel.sh@20 -- # IFS=: 00:06:49.654 21:02:27 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@21 -- # val= 00:06:50.595 21:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@21 -- # val= 00:06:50.595 21:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@21 -- # val= 00:06:50.595 21:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@21 -- # val= 00:06:50.595 21:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@21 -- # val= 00:06:50.595 21:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@21 -- # val= 00:06:50.595 21:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.595 21:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.595 21:02:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.595 21:02:28 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:50.595 21:02:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.595 00:06:50.595 real 0m2.549s 00:06:50.595 user 0m2.349s 00:06:50.595 sys 0m0.195s 00:06:50.595 21:02:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.595 21:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.595 ************************************ 00:06:50.595 END TEST accel_xor 00:06:50.595 ************************************ 00:06:50.854 21:02:28 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:50.854 21:02:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:50.854 21:02:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.854 21:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.854 ************************************ 00:06:50.854 START TEST 
accel_dif_verify 00:06:50.854 ************************************ 00:06:50.854 21:02:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:50.854 21:02:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.855 21:02:28 -- accel/accel.sh@17 -- # local accel_module 00:06:50.855 21:02:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:50.855 21:02:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.855 21:02:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.855 21:02:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.855 21:02:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.855 21:02:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.855 21:02:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.855 21:02:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.855 21:02:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.855 21:02:28 -- accel/accel.sh@42 -- # jq -r . 00:06:50.855 [2024-06-08 21:02:28.747037] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:50.855 [2024-06-08 21:02:28.747146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177898 ] 00:06:50.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.855 [2024-06-08 21:02:28.809005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.855 [2024-06-08 21:02:28.872752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.237 21:02:29 -- accel/accel.sh@18 -- # out=' 00:06:52.237 SPDK Configuration: 00:06:52.237 Core mask: 0x1 00:06:52.237 00:06:52.237 Accel Perf Configuration: 00:06:52.237 Workload Type: dif_verify 00:06:52.237 Vector size: 4096 bytes 00:06:52.237 Transfer size: 4096 bytes 00:06:52.237 Block size: 512 bytes 00:06:52.237 Metadata size: 8 bytes 00:06:52.237 Vector count 1 00:06:52.237 Module: software 00:06:52.237 Queue depth: 32 00:06:52.237 Allocate depth: 32 00:06:52.237 # threads/core: 1 00:06:52.237 Run time: 1 seconds 00:06:52.237 Verify: No 00:06:52.237 00:06:52.237 Running for 1 seconds... 00:06:52.237 00:06:52.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.237 ------------------------------------------------------------------------------------ 00:06:52.237 0,0 95104/s 377 MiB/s 0 0 00:06:52.237 ==================================================================================== 00:06:52.237 Total 95104/s 371 MiB/s 0 0' 00:06:52.237 21:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:52.237 21:02:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.237 21:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.237 21:02:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.237 21:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.237 21:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.237 21:02:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.237 21:02:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.237 21:02:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.237 21:02:30 -- accel/accel.sh@42 -- # jq -r . 
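For reference, the dif_verify case being exercised in the trace above can be rerun by hand with the same accel_perf example binary. A minimal sketch, assuming the SPDK tree in this workspace has already been built; the binary path and the -t/-w flags are taken verbatim from the trace, while dropping the -c /dev/fd/62 option (the harness only feeds an empty JSON accel config through it here) is an assumption:

# Minimal sketch: rerun the dif_verify benchmark seen in the trace above.
# The path follows this log's workspace layout; adjust for other checkouts.
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
# -t 1 runs the workload for 1 second and -w selects dif_verify, matching the
# "Run time" and "Workload Type" lines in the configuration dump above.
# Omitting -c (the JSON accel config) is assumed to be fine for a manual run.
"$ACCEL_PERF" -t 1 -w dif_verify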
00:06:52.237 [2024-06-08 21:02:30.023931] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:52.237 [2024-06-08 21:02:30.024004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178072 ] 00:06:52.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.237 [2024-06-08 21:02:30.085535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.237 [2024-06-08 21:02:30.147602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=0x1 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=dif_verify 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=software 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=32 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=32 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=1 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.237 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.237 21:02:30 -- accel/accel.sh@21 -- # val=No 00:06:52.237 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.238 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:52.238 21:02:30 -- accel/accel.sh@21 -- # val= 00:06:52.238 21:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.238 21:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:52.238 21:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@21 -- # val= 00:06:53.621 21:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@21 -- # val= 00:06:53.621 21:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@21 -- # val= 00:06:53.621 21:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@21 -- # val= 00:06:53.621 21:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@21 -- # val= 00:06:53.621 21:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@21 -- # val= 00:06:53.621 21:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:53.621 21:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:53.621 21:02:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.621 21:02:31 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:53.621 21:02:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.621 00:06:53.621 real 0m2.554s 00:06:53.621 user 0m2.352s 00:06:53.621 sys 0m0.197s 00:06:53.621 21:02:31 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.621 21:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:53.621 ************************************ 00:06:53.621 END TEST accel_dif_verify 00:06:53.621 ************************************ 00:06:53.621 21:02:31 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:53.621 21:02:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:53.621 21:02:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.621 21:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:53.621 ************************************ 00:06:53.621 START TEST accel_dif_generate 00:06:53.621 ************************************ 00:06:53.621 21:02:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:53.621 21:02:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.621 21:02:31 -- accel/accel.sh@17 -- # local accel_module 00:06:53.621 21:02:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:53.621 21:02:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:53.621 21:02:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.621 21:02:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.621 21:02:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.621 21:02:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.621 21:02:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.621 21:02:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.621 21:02:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.621 21:02:31 -- accel/accel.sh@42 -- # jq -r . 00:06:53.621 [2024-06-08 21:02:31.335459] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:53.621 [2024-06-08 21:02:31.335535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178424 ] 00:06:53.621 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.621 [2024-06-08 21:02:31.394892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.621 [2024-06-08 21:02:31.457230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.563 21:02:32 -- accel/accel.sh@18 -- # out=' 00:06:54.563 SPDK Configuration: 00:06:54.563 Core mask: 0x1 00:06:54.563 00:06:54.563 Accel Perf Configuration: 00:06:54.563 Workload Type: dif_generate 00:06:54.563 Vector size: 4096 bytes 00:06:54.563 Transfer size: 4096 bytes 00:06:54.563 Block size: 512 bytes 00:06:54.563 Metadata size: 8 bytes 00:06:54.563 Vector count 1 00:06:54.563 Module: software 00:06:54.563 Queue depth: 32 00:06:54.563 Allocate depth: 32 00:06:54.563 # threads/core: 1 00:06:54.563 Run time: 1 seconds 00:06:54.563 Verify: No 00:06:54.563 00:06:54.563 Running for 1 seconds... 
00:06:54.563 00:06:54.563 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.563 ------------------------------------------------------------------------------------ 00:06:54.563 0,0 114592/s 454 MiB/s 0 0 00:06:54.563 ==================================================================================== 00:06:54.563 Total 114592/s 447 MiB/s 0 0' 00:06:54.563 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.563 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.563 21:02:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:54.563 21:02:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:54.563 21:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.563 21:02:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.563 21:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.563 21:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.563 21:02:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.563 21:02:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.563 21:02:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.563 21:02:32 -- accel/accel.sh@42 -- # jq -r . 00:06:54.563 [2024-06-08 21:02:32.609601] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:54.563 [2024-06-08 21:02:32.609701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178760 ] 00:06:54.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.823 [2024-06-08 21:02:32.670087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.823 [2024-06-08 21:02:32.732221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val=0x1 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val=dif_generate 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 
00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.823 21:02:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.823 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.823 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val=software 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val=32 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val=32 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val=1 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val=No 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:54.824 21:02:32 -- accel/accel.sh@21 -- # val= 00:06:54.824 21:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:54.824 21:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.765 21:02:33 -- accel/accel.sh@21 -- # val= 00:06:55.765 21:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.765 21:02:33 -- accel/accel.sh@21 -- # val= 00:06:55.765 21:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.765 21:02:33 -- accel/accel.sh@21 -- # val= 00:06:55.765 21:02:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.765 21:02:33 -- accel/accel.sh@21 -- # val= 00:06:55.765 21:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.765 21:02:33 -- accel/accel.sh@21 -- # val= 00:06:55.765 21:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:55.765 21:02:33 -- accel/accel.sh@21 -- # val= 00:06:55.765 21:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:55.765 21:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.028 21:02:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.028 21:02:33 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:56.028 21:02:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.028 00:06:56.028 real 0m2.547s 00:06:56.028 user 0m2.336s 00:06:56.028 sys 0m0.207s 00:06:56.028 21:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.028 21:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.028 ************************************ 00:06:56.028 END TEST accel_dif_generate 00:06:56.028 ************************************ 00:06:56.028 21:02:33 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:56.028 21:02:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:56.028 21:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.028 21:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.028 ************************************ 00:06:56.028 START TEST accel_dif_generate_copy 00:06:56.028 ************************************ 00:06:56.028 21:02:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:56.028 21:02:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.028 21:02:33 -- accel/accel.sh@17 -- # local accel_module 00:06:56.028 21:02:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:56.028 21:02:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:56.028 21:02:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.028 21:02:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.028 21:02:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.028 21:02:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.028 21:02:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.028 21:02:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.028 21:02:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.028 21:02:33 -- accel/accel.sh@42 -- # jq -r . 00:06:56.028 [2024-06-08 21:02:33.922438] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:56.028 [2024-06-08 21:02:33.922546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179010 ] 00:06:56.028 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.028 [2024-06-08 21:02:33.983244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.028 [2024-06-08 21:02:34.048047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.413 21:02:35 -- accel/accel.sh@18 -- # out=' 00:06:57.413 SPDK Configuration: 00:06:57.413 Core mask: 0x1 00:06:57.413 00:06:57.413 Accel Perf Configuration: 00:06:57.413 Workload Type: dif_generate_copy 00:06:57.413 Vector size: 4096 bytes 00:06:57.413 Transfer size: 4096 bytes 00:06:57.413 Vector count 1 00:06:57.413 Module: software 00:06:57.413 Queue depth: 32 00:06:57.413 Allocate depth: 32 00:06:57.413 # threads/core: 1 00:06:57.413 Run time: 1 seconds 00:06:57.413 Verify: No 00:06:57.413 00:06:57.413 Running for 1 seconds... 00:06:57.413 00:06:57.413 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.413 ------------------------------------------------------------------------------------ 00:06:57.413 0,0 87616/s 347 MiB/s 0 0 00:06:57.413 ==================================================================================== 00:06:57.413 Total 87616/s 342 MiB/s 0 0' 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.413 21:02:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.413 21:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.413 21:02:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.413 21:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.413 21:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.413 21:02:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.413 21:02:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.413 21:02:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.413 21:02:35 -- accel/accel.sh@42 -- # jq -r . 00:06:57.413 [2024-06-08 21:02:35.199956] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:57.413 [2024-06-08 21:02:35.200053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179153 ] 00:06:57.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.413 [2024-06-08 21:02:35.261903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.413 [2024-06-08 21:02:35.323842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=0x1 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=software 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=32 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=32 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r 
var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=1 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val=No 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 21:02:35 -- accel/accel.sh@21 -- # val= 00:06:57.413 21:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 21:02:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@21 -- # val= 00:06:58.796 21:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@21 -- # val= 00:06:58.796 21:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@21 -- # val= 00:06:58.796 21:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@21 -- # val= 00:06:58.796 21:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@21 -- # val= 00:06:58.796 21:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@21 -- # val= 00:06:58.796 21:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:58.796 21:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:58.796 21:02:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.796 21:02:36 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:58.796 21:02:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.796 00:06:58.796 real 0m2.553s 00:06:58.796 user 0m2.346s 00:06:58.796 sys 0m0.202s 00:06:58.796 21:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.796 21:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:58.796 ************************************ 00:06:58.796 END TEST accel_dif_generate_copy 00:06:58.796 ************************************ 00:06:58.796 21:02:36 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:58.796 21:02:36 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.796 21:02:36 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:58.796 21:02:36 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.796 21:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:58.796 ************************************ 00:06:58.796 START TEST accel_comp 00:06:58.796 ************************************ 00:06:58.796 21:02:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.796 21:02:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.796 21:02:36 -- accel/accel.sh@17 -- # local accel_module 00:06:58.796 21:02:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.796 21:02:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.796 21:02:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.796 21:02:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.796 21:02:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.796 21:02:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.796 21:02:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.796 21:02:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.796 21:02:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.796 21:02:36 -- accel/accel.sh@42 -- # jq -r . 00:06:58.796 [2024-06-08 21:02:36.513094] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:58.796 [2024-06-08 21:02:36.513183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179482 ] 00:06:58.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.796 [2024-06-08 21:02:36.583860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.796 [2024-06-08 21:02:36.648528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.739 21:02:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:59.739 00:06:59.739 SPDK Configuration: 00:06:59.739 Core mask: 0x1 00:06:59.739 00:06:59.739 Accel Perf Configuration: 00:06:59.739 Workload Type: compress 00:06:59.739 Transfer size: 4096 bytes 00:06:59.739 Vector count 1 00:06:59.739 Module: software 00:06:59.739 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.739 Queue depth: 32 00:06:59.739 Allocate depth: 32 00:06:59.739 # threads/core: 1 00:06:59.739 Run time: 1 seconds 00:06:59.739 Verify: No 00:06:59.739 00:06:59.739 Running for 1 seconds... 
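The compress case differs from the earlier workloads only in needing an input file: the invocation in the trace above points -l at the repository's test/accel/bib file (hence "Preparing input file..."). A minimal sketch of the same manual run, under the same assumptions as the dif_verify sketch earlier (workspace paths from this log, -c omitted):

# Minimal sketch: rerun the compress benchmark with the input file used above.
# Both paths follow this log's workspace layout.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -w compress selects the workload, -l names the file to compress, and -t 1
# matches the 1-second run time reported in the configuration dump.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"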
00:06:59.739 00:06:59.739 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.739 ------------------------------------------------------------------------------------ 00:06:59.739 0,0 47392/s 197 MiB/s 0 0 00:06:59.739 ==================================================================================== 00:06:59.739 Total 47392/s 185 MiB/s 0 0' 00:06:59.739 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:06:59.739 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:59.739 21:02:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.739 21:02:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.739 21:02:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.739 21:02:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.739 21:02:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.739 21:02:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.739 21:02:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.739 21:02:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.739 21:02:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.739 21:02:37 -- accel/accel.sh@42 -- # jq -r . 00:06:59.739 [2024-06-08 21:02:37.803889] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:59.739 [2024-06-08 21:02:37.803988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179819 ] 00:07:00.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.000 [2024-06-08 21:02:37.865085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.000 [2024-06-08 21:02:37.926495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=0x1 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=compress 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 
21:02:37 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=software 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=32 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=32 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=1 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val=No 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.000 21:02:37 -- accel/accel.sh@21 -- # val= 00:07:00.000 21:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.000 21:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@21 -- # val= 00:07:01.385 21:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@21 -- # val= 00:07:01.385 21:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@21 -- # val= 00:07:01.385 21:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # 
IFS=: 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@21 -- # val= 00:07:01.385 21:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@21 -- # val= 00:07:01.385 21:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@21 -- # val= 00:07:01.385 21:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:01.385 21:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:01.385 21:02:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.385 21:02:39 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:01.385 21:02:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.385 00:07:01.385 real 0m2.569s 00:07:01.385 user 0m2.365s 00:07:01.385 sys 0m0.198s 00:07:01.385 21:02:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.385 21:02:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.385 ************************************ 00:07:01.385 END TEST accel_comp 00:07:01.385 ************************************ 00:07:01.385 21:02:39 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.385 21:02:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:01.385 21:02:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.385 21:02:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.385 ************************************ 00:07:01.385 START TEST accel_decomp 00:07:01.385 ************************************ 00:07:01.385 21:02:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.385 21:02:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.385 21:02:39 -- accel/accel.sh@17 -- # local accel_module 00:07:01.385 21:02:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.385 21:02:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.385 21:02:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.385 21:02:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.385 21:02:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.385 21:02:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.385 21:02:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.385 21:02:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.385 21:02:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.385 21:02:39 -- accel/accel.sh@42 -- # jq -r . 00:07:01.385 [2024-06-08 21:02:39.120845] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
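For readers who want to reproduce one of these cases outside the harness, the exact accel_perf invocation is already visible verbatim in the xtrace above. A minimal standalone sketch follows, assuming the same workspace checkout path; the per-flag notes are inferred from the 'Accel Perf Configuration' summaries printed later in this log rather than taken from accel_perf documentation, so treat them as assumptions:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1 -> 'Run time: 1 seconds' in the summary (inferred)
  # -w   -> 'Workload Type' (compress/decompress above)
  # -l   -> 'File Name' used as the input data source (inferred)
  # -y   -> 'Verify: Yes' (inferred; the compress run above omits -y and reports Verify: No)
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y

The harness additionally passes '-c /dev/fd/62', which appears to feed the accel_json_cfg built by build_accel_config over a file descriptor; that part is omitted from the sketch above.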
00:07:01.385 [2024-06-08 21:02:39.120915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180155 ] 00:07:01.385 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.385 [2024-06-08 21:02:39.180718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.385 [2024-06-08 21:02:39.243711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.328 21:02:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:02.328 00:07:02.328 SPDK Configuration: 00:07:02.328 Core mask: 0x1 00:07:02.328 00:07:02.328 Accel Perf Configuration: 00:07:02.328 Workload Type: decompress 00:07:02.328 Transfer size: 4096 bytes 00:07:02.328 Vector count 1 00:07:02.328 Module: software 00:07:02.328 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.328 Queue depth: 32 00:07:02.328 Allocate depth: 32 00:07:02.328 # threads/core: 1 00:07:02.328 Run time: 1 seconds 00:07:02.328 Verify: Yes 00:07:02.328 00:07:02.328 Running for 1 seconds... 00:07:02.328 00:07:02.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.328 ------------------------------------------------------------------------------------ 00:07:02.328 0,0 63008/s 116 MiB/s 0 0 00:07:02.328 ==================================================================================== 00:07:02.328 Total 63008/s 246 MiB/s 0 0' 00:07:02.328 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.328 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.328 21:02:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.328 21:02:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.328 21:02:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.328 21:02:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.328 21:02:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.328 21:02:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.328 21:02:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.328 21:02:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.328 21:02:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.328 21:02:40 -- accel/accel.sh@42 -- # jq -r . 00:07:02.328 [2024-06-08 21:02:40.397635] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
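As a sanity check on the result tables in this section, the Total bandwidth row is consistent with the transfer rate multiplied by the reported transfer size (the per-core Bandwidth column is reported differently and is not checked here). For the accel_decomp table above, using bc:

  # 63008 transfers/s * 4096 bytes per transfer, converted to MiB/s
  echo '63008 * 4096 / 1048576' | bc    # prints 246, matching 'Total 63008/s 246 MiB/s'

The same product reproduces the Total rows of the later 111250-byte and multi-core runs as well.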
00:07:02.328 [2024-06-08 21:02:40.397707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180288 ] 00:07:02.590 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.590 [2024-06-08 21:02:40.456925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.590 [2024-06-08 21:02:40.519079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=0x1 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=decompress 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=software 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=32 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 
-- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=32 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=1 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val=Yes 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:02.590 21:02:40 -- accel/accel.sh@21 -- # val= 00:07:02.590 21:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # IFS=: 00:07:02.590 21:02:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@21 -- # val= 00:07:03.977 21:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@21 -- # val= 00:07:03.977 21:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@21 -- # val= 00:07:03.977 21:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@21 -- # val= 00:07:03.977 21:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@21 -- # val= 00:07:03.977 21:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@21 -- # val= 00:07:03.977 21:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.977 21:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.977 21:02:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.977 21:02:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.977 21:02:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.977 00:07:03.977 real 0m2.559s 00:07:03.977 user 0m2.363s 00:07:03.977 sys 0m0.203s 00:07:03.977 21:02:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.977 21:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:03.977 ************************************ 00:07:03.977 END TEST accel_decomp 00:07:03.977 ************************************ 00:07:03.977 21:02:41 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:03.977 21:02:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:03.977 21:02:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.977 21:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:03.977 ************************************ 00:07:03.977 START TEST accel_decmop_full 00:07:03.977 ************************************ 00:07:03.977 21:02:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:03.977 21:02:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.977 21:02:41 -- accel/accel.sh@17 -- # local accel_module 00:07:03.977 21:02:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:03.977 21:02:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:03.977 21:02:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.977 21:02:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.977 21:02:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.977 21:02:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.977 21:02:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.977 21:02:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.977 21:02:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.977 21:02:41 -- accel/accel.sh@42 -- # jq -r . 00:07:03.977 [2024-06-08 21:02:41.722129] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:03.977 [2024-06-08 21:02:41.722234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180540 ] 00:07:03.977 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.977 [2024-06-08 21:02:41.783889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.977 [2024-06-08 21:02:41.849558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.920 21:02:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:04.920 00:07:04.920 SPDK Configuration: 00:07:04.920 Core mask: 0x1 00:07:04.920 00:07:04.920 Accel Perf Configuration: 00:07:04.920 Workload Type: decompress 00:07:04.920 Transfer size: 111250 bytes 00:07:04.920 Vector count 1 00:07:04.920 Module: software 00:07:04.920 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.920 Queue depth: 32 00:07:04.920 Allocate depth: 32 00:07:04.920 # threads/core: 1 00:07:04.920 Run time: 1 seconds 00:07:04.920 Verify: Yes 00:07:04.920 00:07:04.920 Running for 1 seconds... 
00:07:04.920 00:07:04.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.920 ------------------------------------------------------------------------------------ 00:07:04.920 0,0 4064/s 167 MiB/s 0 0 00:07:04.920 ==================================================================================== 00:07:04.920 Total 4064/s 431 MiB/s 0 0' 00:07:04.920 21:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:02:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.920 21:02:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.920 21:02:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.920 21:02:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.920 21:02:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.921 21:02:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.921 21:02:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.921 21:02:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.921 21:02:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.921 21:02:42 -- accel/accel.sh@42 -- # jq -r . 00:07:05.181 [2024-06-08 21:02:43.013126] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:05.181 [2024-06-08 21:02:43.013200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180882 ] 00:07:05.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.181 [2024-06-08 21:02:43.089207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.181 [2024-06-08 21:02:43.151514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=0x1 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=decompress 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:05.181 21:02:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=software 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=32 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=32 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=1 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val=Yes 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:05.181 21:02:43 -- accel/accel.sh@21 -- # val= 00:07:05.181 21:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # IFS=: 00:07:05.181 21:02:43 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@21 -- # val= 00:07:06.627 21:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@21 -- # val= 00:07:06.627 21:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@21 -- # val= 00:07:06.627 21:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.627 21:02:44 -- 
accel/accel.sh@20 -- # IFS=: 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@21 -- # val= 00:07:06.627 21:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@21 -- # val= 00:07:06.627 21:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@21 -- # val= 00:07:06.627 21:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.627 21:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.627 21:02:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.627 21:02:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.627 21:02:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.627 00:07:06.627 real 0m2.599s 00:07:06.627 user 0m2.389s 00:07:06.627 sys 0m0.215s 00:07:06.627 21:02:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.627 21:02:44 -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 ************************************ 00:07:06.627 END TEST accel_decmop_full 00:07:06.627 ************************************ 00:07:06.627 21:02:44 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.627 21:02:44 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:06.627 21:02:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.627 21:02:44 -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 ************************************ 00:07:06.627 START TEST accel_decomp_mcore 00:07:06.627 ************************************ 00:07:06.627 21:02:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.628 21:02:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.628 21:02:44 -- accel/accel.sh@17 -- # local accel_module 00:07:06.628 21:02:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.628 21:02:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.628 21:02:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.628 21:02:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.628 21:02:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.628 21:02:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.628 21:02:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.628 21:02:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.628 21:02:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.628 21:02:44 -- accel/accel.sh@42 -- # jq -r . 00:07:06.628 [2024-06-08 21:02:44.362135] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
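The accel_decmop_full case that just finished differs from the plain accel_decomp case only by the extra '-o 0' argument, and its configuration summary correspondingly reports 'Transfer size: 111250 bytes' instead of '4096 bytes'. Side by side, as the two command lines appear in this log (paths shortened to $SPDK for readability; reading '-o 0' as 'use the full chunk of the input file instead of 4 KiB blocks' is an assumption based on those summaries, not documented behaviour):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y         # Transfer size: 4096 bytes
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0    # Transfer size: 111250 bytes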
00:07:06.628 [2024-06-08 21:02:44.362231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181234 ] 00:07:06.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.628 [2024-06-08 21:02:44.432220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.628 [2024-06-08 21:02:44.500923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.628 [2024-06-08 21:02:44.501035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.628 [2024-06-08 21:02:44.501190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.628 [2024-06-08 21:02:44.501190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.570 21:02:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:07.570 00:07:07.570 SPDK Configuration: 00:07:07.570 Core mask: 0xf 00:07:07.570 00:07:07.570 Accel Perf Configuration: 00:07:07.570 Workload Type: decompress 00:07:07.570 Transfer size: 4096 bytes 00:07:07.570 Vector count 1 00:07:07.570 Module: software 00:07:07.570 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.570 Queue depth: 32 00:07:07.570 Allocate depth: 32 00:07:07.570 # threads/core: 1 00:07:07.570 Run time: 1 seconds 00:07:07.570 Verify: Yes 00:07:07.570 00:07:07.570 Running for 1 seconds... 00:07:07.570 00:07:07.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.570 ------------------------------------------------------------------------------------ 00:07:07.570 0,0 58400/s 107 MiB/s 0 0 00:07:07.570 3,0 58656/s 108 MiB/s 0 0 00:07:07.570 2,0 86336/s 159 MiB/s 0 0 00:07:07.570 1,0 58592/s 107 MiB/s 0 0 00:07:07.570 ==================================================================================== 00:07:07.570 Total 261984/s 1023 MiB/s 0 0' 00:07:07.570 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.570 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.570 21:02:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.570 21:02:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.570 21:02:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.570 21:02:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.570 21:02:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.570 21:02:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.570 21:02:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.570 21:02:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.570 21:02:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.570 21:02:45 -- accel/accel.sh@42 -- # jq -r . 00:07:07.570 [2024-06-08 21:02:45.661504] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
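In this accel_decomp_mcore run the '-m 0xf' argument shows up as 'Core mask: 0xf', four reactors on cores 0 through 3, and one result row per core; the Total row is simply the sum of the per-core transfer rates. Both are easy to check with shell arithmetic (the one-bit-per-core reading of the mask matches the four reactor messages above):

  printf '0x%x\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xf, i.e. cores 0-3
  echo $(( 58400 + 58656 + 86336 + 58592 ))                          # prints 261984, matching the Total row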
00:07:07.570 [2024-06-08 21:02:45.661578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181468 ] 00:07:07.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.831 [2024-06-08 21:02:45.722347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.831 [2024-06-08 21:02:45.787202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.831 [2024-06-08 21:02:45.787317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.831 [2024-06-08 21:02:45.787472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.831 [2024-06-08 21:02:45.787473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val=0xf 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val=decompress 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val=software 00:07:07.831 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.831 21:02:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.831 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.831 21:02:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val=32 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val=32 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val=1 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val=Yes 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:07.832 21:02:45 -- accel/accel.sh@21 -- # val= 00:07:07.832 21:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:07.832 21:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 
21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@21 -- # val= 00:07:09.217 21:02:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.217 21:02:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.217 21:02:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.217 21:02:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.217 21:02:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.217 00:07:09.217 real 0m2.591s 00:07:09.217 user 0m8.840s 00:07:09.217 sys 0m0.222s 00:07:09.217 21:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.217 21:02:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.217 ************************************ 00:07:09.217 END TEST accel_decomp_mcore 00:07:09.217 ************************************ 00:07:09.217 21:02:46 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.217 21:02:46 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:09.217 21:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.217 21:02:46 -- common/autotest_common.sh@10 -- # set +x 00:07:09.217 ************************************ 00:07:09.217 START TEST accel_decomp_full_mcore 00:07:09.217 ************************************ 00:07:09.217 21:02:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.217 21:02:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.217 21:02:46 -- accel/accel.sh@17 -- # local accel_module 00:07:09.217 21:02:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.217 21:02:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.217 21:02:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.217 21:02:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.217 21:02:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.217 21:02:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.217 21:02:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.217 21:02:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.217 21:02:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.217 21:02:46 -- accel/accel.sh@42 -- # jq -r . 00:07:09.217 [2024-06-08 21:02:46.997347] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
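Every case in this section is driven through run_test, which is what prints the 'START TEST'/'END TEST' banners and the real/user/sys timing seen after each run. Its actual implementation in autotest_common.sh is not shown in this log; the function below is only a rough sketch of that observable behaviour, with the body invented for illustration:

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                     # source of the real/user/sys lines in the log
      echo "END TEST $name"
  }
  # e.g. run_test_sketch accel_decomp_mcore accel_test -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf
  # (accel_test stands for the harness function exercised in the xtrace above)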
00:07:09.217 [2024-06-08 21:02:46.997455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181647 ] 00:07:09.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.217 [2024-06-08 21:02:47.059383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.217 [2024-06-08 21:02:47.126674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.217 [2024-06-08 21:02:47.126793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.217 [2024-06-08 21:02:47.126952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.217 [2024-06-08 21:02:47.126952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.602 21:02:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.602 00:07:10.602 SPDK Configuration: 00:07:10.602 Core mask: 0xf 00:07:10.602 00:07:10.602 Accel Perf Configuration: 00:07:10.602 Workload Type: decompress 00:07:10.602 Transfer size: 111250 bytes 00:07:10.602 Vector count 1 00:07:10.602 Module: software 00:07:10.602 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.602 Queue depth: 32 00:07:10.602 Allocate depth: 32 00:07:10.602 # threads/core: 1 00:07:10.602 Run time: 1 seconds 00:07:10.602 Verify: Yes 00:07:10.602 00:07:10.602 Running for 1 seconds... 00:07:10.602 00:07:10.602 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.602 ------------------------------------------------------------------------------------ 00:07:10.602 0,0 4096/s 169 MiB/s 0 0 00:07:10.602 3,0 4096/s 169 MiB/s 0 0 00:07:10.602 2,0 5952/s 245 MiB/s 0 0 00:07:10.602 1,0 4096/s 169 MiB/s 0 0 00:07:10.602 ==================================================================================== 00:07:10.602 Total 18240/s 1935 MiB/s 0 0' 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.602 21:02:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.602 21:02:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.602 21:02:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.602 21:02:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.602 21:02:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.602 21:02:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.602 21:02:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.602 21:02:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.602 21:02:48 -- accel/accel.sh@42 -- # jq -r . 00:07:10.602 [2024-06-08 21:02:48.304424] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
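Comparing the Total rows of the single-core runs earlier in this section against the 0xf runs gives the effective multi-core scaling; all numbers below are taken directly from the tables in this log. Core 2 posts noticeably higher per-core rates than cores 0, 1 and 3 in both multi-core tables, which is why the aggregate scales by more than 4x on four cores:

  echo 'scale=2; 261984 / 63008' | bc   # 4096-byte decompress: ~4.15x going from 1 core to 4
  echo 'scale=2; 18240 / 4064' | bc     # 111250-byte decompress: ~4.48x going from 1 core to 4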
00:07:10.602 [2024-06-08 21:02:48.304532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181953 ] 00:07:10.602 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.602 [2024-06-08 21:02:48.370428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.602 [2024-06-08 21:02:48.433955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.602 [2024-06-08 21:02:48.434068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.602 [2024-06-08 21:02:48.434221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.602 [2024-06-08 21:02:48.434222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val=0xf 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val=decompress 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.602 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.602 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.602 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val=software 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val=32 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val=32 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val=1 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val=Yes 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.603 21:02:48 -- accel/accel.sh@21 -- # val= 00:07:10.603 21:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.603 21:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:11.546 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.546 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.546 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.546 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.546 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.546 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.546 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.546 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.546 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.546 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.546 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.546 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.546 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.546 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.547 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.547 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.547 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.547 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.547 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.547 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.547 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.547 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.547 
21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.547 21:02:49 -- accel/accel.sh@21 -- # val= 00:07:11.547 21:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:11.547 21:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:11.547 21:02:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.547 21:02:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.547 21:02:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.547 00:07:11.547 real 0m2.617s 00:07:11.547 user 0m8.951s 00:07:11.547 sys 0m0.214s 00:07:11.547 21:02:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.547 21:02:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.547 ************************************ 00:07:11.547 END TEST accel_decomp_full_mcore 00:07:11.547 ************************************ 00:07:11.547 21:02:49 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.547 21:02:49 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:11.547 21:02:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.547 21:02:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.547 ************************************ 00:07:11.547 START TEST accel_decomp_mthread 00:07:11.547 ************************************ 00:07:11.547 21:02:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.547 21:02:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.547 21:02:49 -- accel/accel.sh@17 -- # local accel_module 00:07:11.547 21:02:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.547 21:02:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.547 21:02:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.547 21:02:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.547 21:02:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.547 21:02:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.547 21:02:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.547 21:02:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.547 21:02:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.547 21:02:49 -- accel/accel.sh@42 -- # jq -r . 00:07:11.807 [2024-06-08 21:02:49.655562] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:11.807 [2024-06-08 21:02:49.655635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182308 ] 00:07:11.807 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.807 [2024-06-08 21:02:49.715295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.807 [2024-06-08 21:02:49.778235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.191 21:02:50 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:13.191 00:07:13.191 SPDK Configuration: 00:07:13.191 Core mask: 0x1 00:07:13.191 00:07:13.191 Accel Perf Configuration: 00:07:13.191 Workload Type: decompress 00:07:13.191 Transfer size: 4096 bytes 00:07:13.191 Vector count 1 00:07:13.191 Module: software 00:07:13.191 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.191 Queue depth: 32 00:07:13.191 Allocate depth: 32 00:07:13.191 # threads/core: 2 00:07:13.191 Run time: 1 seconds 00:07:13.191 Verify: Yes 00:07:13.191 00:07:13.191 Running for 1 seconds... 00:07:13.191 00:07:13.191 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.191 ------------------------------------------------------------------------------------ 00:07:13.191 0,1 31776/s 58 MiB/s 0 0 00:07:13.191 0,0 31648/s 58 MiB/s 0 0 00:07:13.191 ==================================================================================== 00:07:13.191 Total 63424/s 247 MiB/s 0 0' 00:07:13.191 21:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:13.191 21:02:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:13.191 21:02:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.191 21:02:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.191 21:02:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.191 21:02:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.191 21:02:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.191 21:02:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.191 21:02:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.191 21:02:50 -- accel/accel.sh@42 -- # jq -r . 00:07:13.191 [2024-06-08 21:02:50.938315] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
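The accel_decomp_mthread case adds '-T 2', which appears above as '# threads/core: 2' and as two result rows for core 0 (0,0 and 0,1). The two threads split the work almost evenly and their rates add up to the Total row, which lands close to the single-thread accel_decomp result earlier in this section (63008/s):

  echo $(( 31776 + 31648 ))   # prints 63424, matching 'Total 63424/s 247 MiB/s'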
00:07:13.191 [2024-06-08 21:02:50.938430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182644 ] 00:07:13.191 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.191 [2024-06-08 21:02:50.998579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.191 [2024-06-08 21:02:51.059578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=0x1 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=decompress 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=software 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=32 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 
-- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=32 00:07:13.191 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.191 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.191 21:02:51 -- accel/accel.sh@21 -- # val=2 00:07:13.192 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.192 21:02:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.192 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.192 21:02:51 -- accel/accel.sh@21 -- # val=Yes 00:07:13.192 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.192 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.192 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:13.192 21:02:51 -- accel/accel.sh@21 -- # val= 00:07:13.192 21:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:13.192 21:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@21 -- # val= 00:07:14.133 21:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:14.133 21:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:14.133 21:02:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.133 21:02:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.133 21:02:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.133 00:07:14.133 real 0m2.568s 00:07:14.133 user 0m2.381s 00:07:14.133 sys 0m0.193s 00:07:14.133 21:02:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.133 21:02:52 -- common/autotest_common.sh@10 -- # set +x 
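[Editor's note] A minimal sketch of re-running the multi-threaded software decompress case outside the harness. The flags mirror the accel.sh trace (the full command line appears in the run_test call below); dropping the "-c /dev/fd/62" JSON config fd is an assumption, in which case accel_perf falls back to the built-in software module.

#!/usr/bin/env bash
# Sketch only: standalone accel_perf decompress run, parameters taken from the trace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the trace

"$SPDK_DIR/build/examples/accel_perf" \
    -t 1 \
    -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y \
    -o 4096 \
    -T 2
# -t: run time in seconds      -w: workload type
# -l: input file ("File Name" in the results output)
# -y: verify the output        -T: threads per core ("# threads/core: 2" in the results)
# -o: transfer size in bytes; the "-o 0" variant below reports 111250-byte transfers.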
00:07:14.133 ************************************ 00:07:14.133 END TEST accel_decomp_mthread 00:07:14.133 ************************************ 00:07:14.402 21:02:52 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.402 21:02:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:14.402 21:02:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.402 21:02:52 -- common/autotest_common.sh@10 -- # set +x 00:07:14.402 ************************************ 00:07:14.402 START TEST accel_deomp_full_mthread 00:07:14.402 ************************************ 00:07:14.402 21:02:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.402 21:02:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.402 21:02:52 -- accel/accel.sh@17 -- # local accel_module 00:07:14.402 21:02:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.402 21:02:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:14.402 21:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.402 21:02:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.402 21:02:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.402 21:02:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.402 21:02:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.402 21:02:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.403 21:02:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.403 21:02:52 -- accel/accel.sh@42 -- # jq -r . 00:07:14.403 [2024-06-08 21:02:52.266826] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:14.403 [2024-06-08 21:02:52.266899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182841 ] 00:07:14.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.403 [2024-06-08 21:02:52.328433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.403 [2024-06-08 21:02:52.394588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.787 21:02:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:15.787 00:07:15.787 SPDK Configuration: 00:07:15.787 Core mask: 0x1 00:07:15.787 00:07:15.787 Accel Perf Configuration: 00:07:15.787 Workload Type: decompress 00:07:15.787 Transfer size: 111250 bytes 00:07:15.787 Vector count 1 00:07:15.787 Module: software 00:07:15.787 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.787 Queue depth: 32 00:07:15.787 Allocate depth: 32 00:07:15.787 # threads/core: 2 00:07:15.787 Run time: 1 seconds 00:07:15.787 Verify: Yes 00:07:15.787 00:07:15.787 Running for 1 seconds... 
00:07:15.787 00:07:15.787 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.787 ------------------------------------------------------------------------------------ 00:07:15.787 0,1 2080/s 85 MiB/s 0 0 00:07:15.787 0,0 2048/s 84 MiB/s 0 0 00:07:15.787 ==================================================================================== 00:07:15.787 Total 4128/s 437 MiB/s 0 0' 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.787 21:02:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.787 21:02:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.787 21:02:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.787 21:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.787 21:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.787 21:02:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.787 21:02:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.787 21:02:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.787 21:02:53 -- accel/accel.sh@42 -- # jq -r . 00:07:15.787 [2024-06-08 21:02:53.578323] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:15.787 [2024-06-08 21:02:53.578432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183016 ] 00:07:15.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.787 [2024-06-08 21:02:53.640801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.787 [2024-06-08 21:02:53.702782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val=0x1 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val=decompress 00:07:15.787 
21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val=software 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val=32 00:07:15.787 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.787 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.787 21:02:53 -- accel/accel.sh@21 -- # val=32 00:07:15.788 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.788 21:02:53 -- accel/accel.sh@21 -- # val=2 00:07:15.788 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.788 21:02:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.788 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.788 21:02:53 -- accel/accel.sh@21 -- # val=Yes 00:07:15.788 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.788 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.788 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.788 21:02:53 -- accel/accel.sh@21 -- # val= 00:07:15.788 21:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.788 21:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@21 -- # val= 00:07:17.172 21:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.172 21:02:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.172 21:02:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.172 21:02:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.172 21:02:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.172 00:07:17.172 real 0m2.628s 00:07:17.172 user 0m2.438s 00:07:17.172 sys 0m0.197s 00:07:17.172 21:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.172 21:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.172 ************************************ 00:07:17.172 END TEST accel_deomp_full_mthread 00:07:17.172 ************************************ 00:07:17.172 21:02:54 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:17.172 21:02:54 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.172 21:02:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:17.172 21:02:54 -- accel/accel.sh@129 -- # build_accel_config 00:07:17.172 21:02:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.172 21:02:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.172 21:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.172 21:02:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.172 21:02:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.172 21:02:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.172 21:02:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.172 21:02:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.172 21:02:54 -- accel/accel.sh@42 -- # jq -r . 00:07:17.172 ************************************ 00:07:17.172 START TEST accel_dif_functional_tests 00:07:17.172 ************************************ 00:07:17.172 21:02:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.172 [2024-06-08 21:02:54.962082] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:17.172 [2024-06-08 21:02:54.962164] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183368 ] 00:07:17.172 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.172 [2024-06-08 21:02:55.023596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.172 [2024-06-08 21:02:55.087745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.172 [2024-06-08 21:02:55.087866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.172 [2024-06-08 21:02:55.087869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.172 00:07:17.172 00:07:17.172 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.172 http://cunit.sourceforge.net/ 00:07:17.172 00:07:17.172 00:07:17.172 Suite: accel_dif 00:07:17.172 Test: verify: DIF generated, GUARD check ...passed 00:07:17.172 Test: verify: DIF generated, APPTAG check ...passed 00:07:17.172 Test: verify: DIF generated, REFTAG check ...passed 00:07:17.172 Test: verify: DIF not generated, GUARD check ...[2024-06-08 21:02:55.143136] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.172 [2024-06-08 21:02:55.143175] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.172 passed 00:07:17.172 Test: verify: DIF not generated, APPTAG check ...[2024-06-08 21:02:55.143205] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.172 [2024-06-08 21:02:55.143220] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.172 passed 00:07:17.172 Test: verify: DIF not generated, REFTAG check ...[2024-06-08 21:02:55.143236] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.172 [2024-06-08 21:02:55.143250] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.172 passed 00:07:17.172 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:17.172 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-08 21:02:55.143292] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:17.172 passed 00:07:17.172 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:17.172 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:17.172 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:17.172 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-08 21:02:55.143407] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:17.172 passed 00:07:17.172 Test: generate copy: DIF generated, GUARD check ...passed 00:07:17.172 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:17.172 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:17.172 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:17.172 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:17.172 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:17.172 Test: generate copy: iovecs-len validate ...[2024-06-08 21:02:55.143601] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:17.172 passed 00:07:17.172 Test: generate copy: buffer alignment validate ...passed 00:07:17.172 00:07:17.172 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.172 suites 1 1 n/a 0 0 00:07:17.172 tests 20 20 20 0 0 00:07:17.172 asserts 204 204 204 0 n/a 00:07:17.172 00:07:17.172 Elapsed time = 0.002 seconds 00:07:17.172 00:07:17.172 real 0m0.348s 00:07:17.172 user 0m0.487s 00:07:17.173 sys 0m0.123s 00:07:17.173 21:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.173 21:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:17.173 ************************************ 00:07:17.173 END TEST accel_dif_functional_tests 00:07:17.173 ************************************ 00:07:17.433 00:07:17.433 real 0m54.489s 00:07:17.433 user 1m2.931s 00:07:17.433 sys 0m5.532s 00:07:17.433 21:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.433 21:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 ************************************ 00:07:17.433 END TEST accel 00:07:17.434 ************************************ 00:07:17.434 21:02:55 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:17.434 21:02:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.434 21:02:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.434 21:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:17.434 ************************************ 00:07:17.434 START TEST accel_rpc 00:07:17.434 ************************************ 00:07:17.434 21:02:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:17.434 * Looking for test storage... 00:07:17.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:17.434 21:02:55 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.434 21:02:55 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2183493 00:07:17.434 21:02:55 -- accel/accel_rpc.sh@15 -- # waitforlisten 2183493 00:07:17.434 21:02:55 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:17.434 21:02:55 -- common/autotest_common.sh@819 -- # '[' -z 2183493 ']' 00:07:17.434 21:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.434 21:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.434 21:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.434 21:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.434 21:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:17.434 [2024-06-08 21:02:55.483266] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:17.434 [2024-06-08 21:02:55.483322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183493 ] 00:07:17.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.695 [2024-06-08 21:02:55.541742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.695 [2024-06-08 21:02:55.603726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.695 [2024-06-08 21:02:55.603867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.267 21:02:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.267 21:02:56 -- common/autotest_common.sh@852 -- # return 0 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:18.267 21:02:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.267 21:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.267 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.267 ************************************ 00:07:18.267 START TEST accel_assign_opcode 00:07:18.267 ************************************ 00:07:18.267 21:02:56 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:18.267 21:02:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.267 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.267 [2024-06-08 21:02:56.257752] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:18.267 21:02:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:18.267 21:02:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.267 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.267 [2024-06-08 21:02:56.265764] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:18.267 21:02:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.267 21:02:56 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:18.267 21:02:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.267 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 21:02:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.527 21:02:56 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:18.527 21:02:56 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:18.527 21:02:56 -- accel/accel_rpc.sh@42 -- # grep software 00:07:18.527 21:02:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.527 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 21:02:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.527 software 00:07:18.527 00:07:18.527 real 0m0.204s 00:07:18.527 user 0m0.045s 00:07:18.527 sys 0m0.010s 00:07:18.527 21:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.527 21:02:56 -- common/autotest_common.sh@10 -- # set +x 
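[Editor's note] A minimal sketch of the opcode-assignment flow exercised above, driven with scripts/rpc.py directly instead of the rpc_cmd() helper. It assumes a spdk_tgt that was started with --wait-for-rpc (as in the trace) and is listening on the default /var/tmp/spdk.sock.

#!/usr/bin/env bash
# Sketch only: assign the copy opcode to the software module, finish init, verify.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

"$RPC" accel_assign_opc -o copy -m software      # pin the copy opcode to the software module
"$RPC" framework_start_init                      # finish subsystem init so the assignment takes effect
"$RPC" accel_get_opc_assignments | jq -r .copy   # expected output: software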
00:07:18.527 ************************************ 00:07:18.527 END TEST accel_assign_opcode 00:07:18.528 ************************************ 00:07:18.528 21:02:56 -- accel/accel_rpc.sh@55 -- # killprocess 2183493 00:07:18.528 21:02:56 -- common/autotest_common.sh@926 -- # '[' -z 2183493 ']' 00:07:18.528 21:02:56 -- common/autotest_common.sh@930 -- # kill -0 2183493 00:07:18.528 21:02:56 -- common/autotest_common.sh@931 -- # uname 00:07:18.528 21:02:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:18.528 21:02:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2183493 00:07:18.528 21:02:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:18.528 21:02:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:18.528 21:02:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2183493' 00:07:18.528 killing process with pid 2183493 00:07:18.528 21:02:56 -- common/autotest_common.sh@945 -- # kill 2183493 00:07:18.528 21:02:56 -- common/autotest_common.sh@950 -- # wait 2183493 00:07:18.789 00:07:18.789 real 0m1.417s 00:07:18.789 user 0m1.483s 00:07:18.789 sys 0m0.387s 00:07:18.789 21:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.789 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.789 ************************************ 00:07:18.789 END TEST accel_rpc 00:07:18.789 ************************************ 00:07:18.789 21:02:56 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:18.789 21:02:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.789 21:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.789 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:18.789 ************************************ 00:07:18.789 START TEST app_cmdline 00:07:18.789 ************************************ 00:07:18.789 21:02:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:19.051 * Looking for test storage... 00:07:19.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:19.051 21:02:56 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:19.051 21:02:56 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2183838 00:07:19.051 21:02:56 -- app/cmdline.sh@18 -- # waitforlisten 2183838 00:07:19.051 21:02:56 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:19.051 21:02:56 -- common/autotest_common.sh@819 -- # '[' -z 2183838 ']' 00:07:19.051 21:02:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.051 21:02:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:19.051 21:02:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.051 21:02:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:19.051 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:19.051 [2024-06-08 21:02:56.942668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:19.051 [2024-06-08 21:02:56.942721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183838 ] 00:07:19.051 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.051 [2024-06-08 21:02:57.003767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.051 [2024-06-08 21:02:57.066168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.051 [2024-06-08 21:02:57.066311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.624 21:02:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.624 21:02:57 -- common/autotest_common.sh@852 -- # return 0 00:07:19.624 21:02:57 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:19.885 { 00:07:19.885 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:19.885 "fields": { 00:07:19.885 "major": 24, 00:07:19.885 "minor": 1, 00:07:19.885 "patch": 1, 00:07:19.885 "suffix": "-pre", 00:07:19.885 "commit": "130b9406a" 00:07:19.885 } 00:07:19.885 } 00:07:19.885 21:02:57 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:19.885 21:02:57 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:19.885 21:02:57 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:19.885 21:02:57 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:19.885 21:02:57 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:19.885 21:02:57 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:19.885 21:02:57 -- app/cmdline.sh@26 -- # sort 00:07:19.885 21:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.885 21:02:57 -- common/autotest_common.sh@10 -- # set +x 00:07:19.885 21:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.885 21:02:57 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:19.885 21:02:57 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:19.885 21:02:57 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.885 21:02:57 -- common/autotest_common.sh@640 -- # local es=0 00:07:19.885 21:02:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.885 21:02:57 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.885 21:02:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:19.885 21:02:57 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.885 21:02:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:19.885 21:02:57 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.885 21:02:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:19.885 21:02:57 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.885 21:02:57 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:19.885 21:02:57 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.146 request: 00:07:20.146 { 00:07:20.146 "method": "env_dpdk_get_mem_stats", 00:07:20.146 "req_id": 1 00:07:20.146 } 00:07:20.146 Got JSON-RPC error response 00:07:20.146 response: 00:07:20.146 { 00:07:20.146 "code": -32601, 00:07:20.146 "message": "Method not found" 00:07:20.146 } 00:07:20.146 21:02:58 -- common/autotest_common.sh@643 -- # es=1 00:07:20.146 21:02:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:20.146 21:02:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:20.146 21:02:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:20.146 21:02:58 -- app/cmdline.sh@1 -- # killprocess 2183838 00:07:20.146 21:02:58 -- common/autotest_common.sh@926 -- # '[' -z 2183838 ']' 00:07:20.146 21:02:58 -- common/autotest_common.sh@930 -- # kill -0 2183838 00:07:20.146 21:02:58 -- common/autotest_common.sh@931 -- # uname 00:07:20.146 21:02:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.146 21:02:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2183838 00:07:20.146 21:02:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:20.146 21:02:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:20.146 21:02:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2183838' 00:07:20.146 killing process with pid 2183838 00:07:20.146 21:02:58 -- common/autotest_common.sh@945 -- # kill 2183838 00:07:20.146 21:02:58 -- common/autotest_common.sh@950 -- # wait 2183838 00:07:20.407 00:07:20.407 real 0m1.537s 00:07:20.407 user 0m1.851s 00:07:20.407 sys 0m0.384s 00:07:20.407 21:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.407 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.407 ************************************ 00:07:20.407 END TEST app_cmdline 00:07:20.407 ************************************ 00:07:20.407 21:02:58 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:20.407 21:02:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.407 21:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.407 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.407 ************************************ 00:07:20.407 START TEST version 00:07:20.407 ************************************ 00:07:20.407 21:02:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:20.407 * Looking for test storage... 
00:07:20.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.407 21:02:58 -- app/version.sh@17 -- # get_header_version major 00:07:20.407 21:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.407 21:02:58 -- app/version.sh@14 -- # cut -f2 00:07:20.407 21:02:58 -- app/version.sh@14 -- # tr -d '"' 00:07:20.407 21:02:58 -- app/version.sh@17 -- # major=24 00:07:20.407 21:02:58 -- app/version.sh@18 -- # get_header_version minor 00:07:20.407 21:02:58 -- app/version.sh@14 -- # tr -d '"' 00:07:20.407 21:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.407 21:02:58 -- app/version.sh@14 -- # cut -f2 00:07:20.407 21:02:58 -- app/version.sh@18 -- # minor=1 00:07:20.407 21:02:58 -- app/version.sh@19 -- # get_header_version patch 00:07:20.407 21:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.407 21:02:58 -- app/version.sh@14 -- # cut -f2 00:07:20.407 21:02:58 -- app/version.sh@14 -- # tr -d '"' 00:07:20.669 21:02:58 -- app/version.sh@19 -- # patch=1 00:07:20.669 21:02:58 -- app/version.sh@20 -- # get_header_version suffix 00:07:20.669 21:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.669 21:02:58 -- app/version.sh@14 -- # cut -f2 00:07:20.669 21:02:58 -- app/version.sh@14 -- # tr -d '"' 00:07:20.669 21:02:58 -- app/version.sh@20 -- # suffix=-pre 00:07:20.669 21:02:58 -- app/version.sh@22 -- # version=24.1 00:07:20.669 21:02:58 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:20.669 21:02:58 -- app/version.sh@25 -- # version=24.1.1 00:07:20.669 21:02:58 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:20.669 21:02:58 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:20.669 21:02:58 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:20.669 21:02:58 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:20.669 21:02:58 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:20.669 00:07:20.669 real 0m0.164s 00:07:20.669 user 0m0.073s 00:07:20.669 sys 0m0.126s 00:07:20.669 21:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.669 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.669 ************************************ 00:07:20.669 END TEST version 00:07:20.669 ************************************ 00:07:20.669 21:02:58 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@204 -- # uname -s 00:07:20.669 21:02:58 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:20.669 21:02:58 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:20.669 21:02:58 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:20.669 21:02:58 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:20.669 21:02:58 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:20.669 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.669 21:02:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:20.669 21:02:58 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:20.669 21:02:58 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:20.669 21:02:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:20.669 21:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.669 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.669 ************************************ 00:07:20.669 START TEST nvmf_tcp 00:07:20.669 ************************************ 00:07:20.669 21:02:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:20.669 * Looking for test storage... 00:07:20.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:20.669 21:02:58 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:20.669 21:02:58 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:20.669 21:02:58 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.669 21:02:58 -- nvmf/common.sh@7 -- # uname -s 00:07:20.669 21:02:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.669 21:02:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.669 21:02:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.669 21:02:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.669 21:02:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.669 21:02:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.669 21:02:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.669 21:02:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.669 21:02:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.669 21:02:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.669 21:02:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.669 21:02:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.669 21:02:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.669 21:02:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.669 21:02:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.669 21:02:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.669 21:02:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.669 21:02:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.669 21:02:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.669 21:02:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.670 21:02:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.670 21:02:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.670 21:02:58 -- paths/export.sh@5 -- # export PATH 00:07:20.670 21:02:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.670 21:02:58 -- nvmf/common.sh@46 -- # : 0 00:07:20.670 21:02:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.670 21:02:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.670 21:02:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.670 21:02:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.670 21:02:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.670 21:02:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.670 21:02:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.670 21:02:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.670 21:02:58 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:20.670 21:02:58 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:20.670 21:02:58 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:20.670 21:02:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:20.670 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 21:02:58 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:20.670 21:02:58 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:20.670 21:02:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:20.670 21:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.670 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 ************************************ 00:07:20.670 START TEST nvmf_example 00:07:20.670 ************************************ 00:07:20.670 21:02:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:20.932 * Looking for test storage... 
00:07:20.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.932 21:02:58 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.932 21:02:58 -- nvmf/common.sh@7 -- # uname -s 00:07:20.932 21:02:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.932 21:02:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.932 21:02:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.932 21:02:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.932 21:02:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.932 21:02:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.932 21:02:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.932 21:02:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.932 21:02:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.932 21:02:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.932 21:02:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.932 21:02:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.932 21:02:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.932 21:02:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.932 21:02:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.932 21:02:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.932 21:02:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.932 21:02:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.932 21:02:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.932 21:02:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.932 21:02:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.932 21:02:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.932 21:02:58 -- paths/export.sh@5 -- # export PATH 00:07:20.932 21:02:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.932 21:02:58 -- nvmf/common.sh@46 -- # : 0 00:07:20.932 21:02:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.932 21:02:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.932 21:02:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.932 21:02:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.932 21:02:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.932 21:02:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.932 21:02:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.932 21:02:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.932 21:02:58 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:20.932 21:02:58 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:20.932 21:02:58 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:20.932 21:02:58 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:20.932 21:02:58 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:20.932 21:02:58 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:20.932 21:02:58 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:20.932 21:02:58 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:20.932 21:02:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:20.932 21:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.932 21:02:58 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:20.932 21:02:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:20.932 21:02:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.932 21:02:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:20.932 21:02:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:20.933 21:02:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:20.933 21:02:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.933 21:02:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.933 21:02:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.933 21:02:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:20.933 21:02:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:20.933 21:02:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:20.933 21:02:58 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.575 21:03:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:27.575 21:03:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:27.575 21:03:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:27.575 21:03:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:27.575 21:03:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:27.575 21:03:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:27.575 21:03:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:27.575 21:03:05 -- nvmf/common.sh@294 -- # net_devs=() 00:07:27.575 21:03:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:27.575 21:03:05 -- nvmf/common.sh@295 -- # e810=() 00:07:27.575 21:03:05 -- nvmf/common.sh@295 -- # local -ga e810 00:07:27.575 21:03:05 -- nvmf/common.sh@296 -- # x722=() 00:07:27.575 21:03:05 -- nvmf/common.sh@296 -- # local -ga x722 00:07:27.575 21:03:05 -- nvmf/common.sh@297 -- # mlx=() 00:07:27.575 21:03:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:27.575 21:03:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.575 21:03:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:27.575 21:03:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:27.575 21:03:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:27.575 21:03:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:27.575 21:03:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:27.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:27.575 21:03:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:27.575 21:03:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:27.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:27.575 21:03:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:27.575 21:03:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:27.575 21:03:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:27.575 21:03:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:27.575 21:03:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.576 21:03:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:27.576 21:03:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.576 21:03:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:27.576 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:27.576 21:03:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.576 21:03:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:27.576 21:03:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.576 21:03:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:27.576 21:03:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.576 21:03:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:27.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:27.576 21:03:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.576 21:03:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:27.576 21:03:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:27.576 21:03:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:27.576 21:03:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:27.576 21:03:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:27.576 21:03:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.576 21:03:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.576 21:03:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.576 21:03:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:27.576 21:03:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.576 21:03:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.576 21:03:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:27.576 21:03:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.576 21:03:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.576 21:03:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:27.576 21:03:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:27.576 21:03:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.576 21:03:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.837 21:03:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.837 21:03:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.837 21:03:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:27.837 21:03:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.837 21:03:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.837 21:03:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.837 21:03:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:27.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:27.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:07:27.837 00:07:27.837 --- 10.0.0.2 ping statistics --- 00:07:27.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.837 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:07:27.837 21:03:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:07:27.837 00:07:27.837 --- 10.0.0.1 ping statistics --- 00:07:27.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.837 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:07:27.837 21:03:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.837 21:03:05 -- nvmf/common.sh@410 -- # return 0 00:07:27.837 21:03:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:27.837 21:03:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.837 21:03:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:27.837 21:03:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:27.837 21:03:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.837 21:03:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:27.837 21:03:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:27.837 21:03:05 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:27.837 21:03:05 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:27.837 21:03:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:27.837 21:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 21:03:05 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:27.837 21:03:05 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:27.837 21:03:05 -- target/nvmf_example.sh@34 -- # nvmfpid=2188185 00:07:27.837 21:03:05 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.837 21:03:05 -- target/nvmf_example.sh@36 -- # waitforlisten 2188185 00:07:27.837 21:03:05 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:27.837 21:03:05 -- common/autotest_common.sh@819 -- # '[' -z 2188185 ']' 00:07:27.837 21:03:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.837 21:03:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:27.837 21:03:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
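At this point nvmf_tcp_init has finished laying out the test topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with iptables, and a ping in each direction confirms reachability before nvme-tcp is loaded and the example target is launched inside the namespace. A condensed replay of those steps, taken directly from the trace (the interface names are host-specific; on another machine the ice netdevs will be called something else):

# Replay of the nvmf_tcp_init sequence traced above.
TGT_IF=cvl_0_0            # target-side port, isolated in its own netns
INI_IF=cvl_0_1            # initiator-side port, left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open TCP port 4420 on the root-namespace interface, exactly as in the trace
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> root namespace
modprobe nvme-tcp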
00:07:27.837 21:03:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:27.838 21:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:28.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.670 21:03:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:28.670 21:03:06 -- common/autotest_common.sh@852 -- # return 0 00:07:28.670 21:03:06 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:28.670 21:03:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:28.670 21:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.931 21:03:06 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.931 21:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.931 21:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.931 21:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.931 21:03:06 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:28.931 21:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.931 21:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.931 21:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.931 21:03:06 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:28.931 21:03:06 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:28.931 21:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.931 21:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.931 21:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.931 21:03:06 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:28.931 21:03:06 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:28.931 21:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.931 21:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.931 21:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.931 21:03:06 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.931 21:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.931 21:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.931 21:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.931 21:03:06 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:28.931 21:03:06 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:28.931 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.964 Initializing NVMe Controllers 00:07:38.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:38.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:38.964 Initialization complete. Launching workers. 
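Once the example target is up on its RPC socket, the test provisions it entirely over JSON-RPC: a TCP transport with the options from the trace (-t tcp -o -u 8192), a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (allow-any-host, serial SPDK00000000000001) with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420, after which spdk_nvme_perf is run against it from the root namespace. The same sequence written as explicit scripts/rpc.py calls is sketched below; rpc_cmd in the trace effectively forwards to this script, and the relative paths assume the commands are run from an SPDK checkout.

# Provision the running target over its default socket (/var/tmp/spdk.sock),
# using the same arguments that appear in the trace above.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                                   # prints "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive it from the initiator side as the test does: queue depth 64, 4 KiB
# random mixed I/O for 10 seconds against the subsystem just created.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'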
00:07:38.964 ======================================================== 00:07:38.964 Latency(us) 00:07:38.964 Device Information : IOPS MiB/s Average min max 00:07:38.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15962.50 62.35 4009.33 871.30 15230.12 00:07:38.964 ======================================================== 00:07:38.964 Total : 15962.50 62.35 4009.33 871.30 15230.12 00:07:38.964 00:07:38.964 21:03:17 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:38.964 21:03:17 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:38.964 21:03:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:38.964 21:03:17 -- nvmf/common.sh@116 -- # sync 00:07:38.964 21:03:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:38.964 21:03:17 -- nvmf/common.sh@119 -- # set +e 00:07:38.964 21:03:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:38.964 21:03:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:38.964 rmmod nvme_tcp 00:07:38.964 rmmod nvme_fabrics 00:07:39.225 rmmod nvme_keyring 00:07:39.225 21:03:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:39.225 21:03:17 -- nvmf/common.sh@123 -- # set -e 00:07:39.225 21:03:17 -- nvmf/common.sh@124 -- # return 0 00:07:39.225 21:03:17 -- nvmf/common.sh@477 -- # '[' -n 2188185 ']' 00:07:39.225 21:03:17 -- nvmf/common.sh@478 -- # killprocess 2188185 00:07:39.225 21:03:17 -- common/autotest_common.sh@926 -- # '[' -z 2188185 ']' 00:07:39.225 21:03:17 -- common/autotest_common.sh@930 -- # kill -0 2188185 00:07:39.225 21:03:17 -- common/autotest_common.sh@931 -- # uname 00:07:39.225 21:03:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.225 21:03:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2188185 00:07:39.225 21:03:17 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:39.225 21:03:17 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:39.225 21:03:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2188185' 00:07:39.225 killing process with pid 2188185 00:07:39.225 21:03:17 -- common/autotest_common.sh@945 -- # kill 2188185 00:07:39.225 21:03:17 -- common/autotest_common.sh@950 -- # wait 2188185 00:07:39.225 nvmf threads initialize successfully 00:07:39.225 bdev subsystem init successfully 00:07:39.225 created a nvmf target service 00:07:39.225 create targets's poll groups done 00:07:39.225 all subsystems of target started 00:07:39.225 nvmf target is running 00:07:39.225 all subsystems of target stopped 00:07:39.225 destroy targets's poll groups done 00:07:39.225 destroyed the nvmf target service 00:07:39.225 bdev subsystem finish successfully 00:07:39.225 nvmf threads destroy successfully 00:07:39.225 21:03:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:39.225 21:03:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:39.225 21:03:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:39.225 21:03:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.225 21:03:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:39.225 21:03:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.225 21:03:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.225 21:03:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.773 21:03:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:41.773 21:03:19 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:41.773 21:03:19 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:41.773 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:41.773 00:07:41.773 real 0m20.632s 00:07:41.774 user 0m46.041s 00:07:41.774 sys 0m6.312s 00:07:41.774 21:03:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.774 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:41.774 ************************************ 00:07:41.774 END TEST nvmf_example 00:07:41.774 ************************************ 00:07:41.774 21:03:19 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.774 21:03:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:41.774 21:03:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.774 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:41.774 ************************************ 00:07:41.774 START TEST nvmf_filesystem 00:07:41.774 ************************************ 00:07:41.774 21:03:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.774 * Looking for test storage... 00:07:41.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.774 21:03:19 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:41.774 21:03:19 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:41.774 21:03:19 -- common/autotest_common.sh@34 -- # set -e 00:07:41.774 21:03:19 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:41.774 21:03:19 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:41.774 21:03:19 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:41.774 21:03:19 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:41.774 21:03:19 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:41.774 21:03:19 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:41.774 21:03:19 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:41.774 21:03:19 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:41.774 21:03:19 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:41.774 21:03:19 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:41.774 21:03:19 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:41.774 21:03:19 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:41.774 21:03:19 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:41.774 21:03:19 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:41.774 21:03:19 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:41.774 21:03:19 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:41.774 21:03:19 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:41.774 21:03:19 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:41.774 21:03:19 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:41.774 21:03:19 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:41.774 21:03:19 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:41.774 21:03:19 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:41.774 21:03:19 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:41.774 21:03:19 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:41.774 21:03:19 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:41.774 21:03:19 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:41.774 21:03:19 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:41.774 21:03:19 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:41.774 21:03:19 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:41.774 21:03:19 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:41.774 21:03:19 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:41.774 21:03:19 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:41.774 21:03:19 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:41.774 21:03:19 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:41.774 21:03:19 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:41.774 21:03:19 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:41.774 21:03:19 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:41.774 21:03:19 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:41.774 21:03:19 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:41.774 21:03:19 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:41.774 21:03:19 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:41.774 21:03:19 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:41.774 21:03:19 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:41.774 21:03:19 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:41.774 21:03:19 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:41.774 21:03:19 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:41.774 21:03:19 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:41.774 21:03:19 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:41.774 21:03:19 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:41.774 21:03:19 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:41.774 21:03:19 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:41.774 21:03:19 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:41.774 21:03:19 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:41.774 21:03:19 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:41.774 21:03:19 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:41.774 21:03:19 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:41.774 21:03:19 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:41.774 21:03:19 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:41.774 21:03:19 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:41.774 21:03:19 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:41.774 21:03:19 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:41.774 21:03:19 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:41.774 21:03:19 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:41.774 21:03:19 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:07:41.774 21:03:19 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:41.774 21:03:19 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:41.774 21:03:19 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:41.774 21:03:19 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:41.774 21:03:19 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:41.774 21:03:19 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:41.774 21:03:19 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:41.774 21:03:19 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:41.774 21:03:19 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:41.774 21:03:19 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:41.774 21:03:19 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:41.774 21:03:19 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:41.774 21:03:19 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:41.774 21:03:19 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:41.774 21:03:19 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:41.774 21:03:19 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:41.774 21:03:19 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:41.774 21:03:19 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.774 21:03:19 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.774 21:03:19 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.774 21:03:19 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.774 21:03:19 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:41.774 21:03:19 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:41.774 21:03:19 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:41.774 21:03:19 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:41.774 21:03:19 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:41.774 21:03:19 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:41.774 21:03:19 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:41.774 21:03:19 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:41.774 #define SPDK_CONFIG_H 00:07:41.774 #define SPDK_CONFIG_APPS 1 00:07:41.774 #define SPDK_CONFIG_ARCH native 00:07:41.774 #undef SPDK_CONFIG_ASAN 00:07:41.774 #undef SPDK_CONFIG_AVAHI 00:07:41.774 #undef SPDK_CONFIG_CET 00:07:41.774 #define SPDK_CONFIG_COVERAGE 1 00:07:41.774 #define SPDK_CONFIG_CROSS_PREFIX 00:07:41.775 #undef SPDK_CONFIG_CRYPTO 00:07:41.775 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:41.775 #undef SPDK_CONFIG_CUSTOMOCF 00:07:41.775 #undef SPDK_CONFIG_DAOS 00:07:41.775 #define SPDK_CONFIG_DAOS_DIR 00:07:41.775 #define SPDK_CONFIG_DEBUG 1 00:07:41.775 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:41.775 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:41.775 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:41.775 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:07:41.775 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:41.775 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:41.775 #define SPDK_CONFIG_EXAMPLES 1 00:07:41.775 #undef SPDK_CONFIG_FC 00:07:41.775 #define SPDK_CONFIG_FC_PATH 00:07:41.775 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:41.775 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:41.775 #undef SPDK_CONFIG_FUSE 00:07:41.775 #undef SPDK_CONFIG_FUZZER 00:07:41.775 #define SPDK_CONFIG_FUZZER_LIB 00:07:41.775 #undef SPDK_CONFIG_GOLANG 00:07:41.775 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:41.775 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:41.775 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:41.775 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:41.775 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:41.775 #define SPDK_CONFIG_IDXD 1 00:07:41.775 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:41.775 #undef SPDK_CONFIG_IPSEC_MB 00:07:41.775 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:41.775 #define SPDK_CONFIG_ISAL 1 00:07:41.775 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:41.775 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:41.775 #define SPDK_CONFIG_LIBDIR 00:07:41.775 #undef SPDK_CONFIG_LTO 00:07:41.775 #define SPDK_CONFIG_MAX_LCORES 00:07:41.775 #define SPDK_CONFIG_NVME_CUSE 1 00:07:41.775 #undef SPDK_CONFIG_OCF 00:07:41.775 #define SPDK_CONFIG_OCF_PATH 00:07:41.775 #define SPDK_CONFIG_OPENSSL_PATH 00:07:41.775 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:41.775 #undef SPDK_CONFIG_PGO_USE 00:07:41.775 #define SPDK_CONFIG_PREFIX /usr/local 00:07:41.775 #undef SPDK_CONFIG_RAID5F 00:07:41.775 #undef SPDK_CONFIG_RBD 00:07:41.775 #define SPDK_CONFIG_RDMA 1 00:07:41.775 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:41.775 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:41.775 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:41.775 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:41.775 #define SPDK_CONFIG_SHARED 1 00:07:41.775 #undef SPDK_CONFIG_SMA 00:07:41.775 #define SPDK_CONFIG_TESTS 1 00:07:41.775 #undef SPDK_CONFIG_TSAN 00:07:41.775 #define SPDK_CONFIG_UBLK 1 00:07:41.775 #define SPDK_CONFIG_UBSAN 1 00:07:41.775 #undef SPDK_CONFIG_UNIT_TESTS 00:07:41.775 #undef SPDK_CONFIG_URING 00:07:41.775 #define SPDK_CONFIG_URING_PATH 00:07:41.775 #undef SPDK_CONFIG_URING_ZNS 00:07:41.775 #undef SPDK_CONFIG_USDT 00:07:41.775 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:41.775 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:41.775 #undef SPDK_CONFIG_VFIO_USER 00:07:41.775 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:41.775 #define SPDK_CONFIG_VHOST 1 00:07:41.775 #define SPDK_CONFIG_VIRTIO 1 00:07:41.775 #undef SPDK_CONFIG_VTUNE 00:07:41.775 #define SPDK_CONFIG_VTUNE_DIR 00:07:41.775 #define SPDK_CONFIG_WERROR 1 00:07:41.775 #define SPDK_CONFIG_WPDK_DIR 00:07:41.775 #undef SPDK_CONFIG_XNVME 00:07:41.775 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:41.775 21:03:19 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:41.775 21:03:19 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.775 21:03:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.775 21:03:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.775 21:03:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.775 21:03:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.775 21:03:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.775 21:03:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.775 21:03:19 -- paths/export.sh@5 -- # export PATH 00:07:41.775 21:03:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.775 21:03:19 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:41.775 21:03:19 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:41.775 21:03:19 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:41.775 21:03:19 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:41.775 21:03:19 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:41.775 21:03:19 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.775 21:03:19 -- pm/common@16 -- # TEST_TAG=N/A 00:07:41.775 21:03:19 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:41.775 21:03:19 -- common/autotest_common.sh@52 -- # : 1 00:07:41.775 21:03:19 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:41.775 21:03:19 -- common/autotest_common.sh@56 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:41.775 21:03:19 -- 
common/autotest_common.sh@58 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:41.775 21:03:19 -- common/autotest_common.sh@60 -- # : 1 00:07:41.775 21:03:19 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:41.775 21:03:19 -- common/autotest_common.sh@62 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:41.775 21:03:19 -- common/autotest_common.sh@64 -- # : 00:07:41.775 21:03:19 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:41.775 21:03:19 -- common/autotest_common.sh@66 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:41.775 21:03:19 -- common/autotest_common.sh@68 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:41.775 21:03:19 -- common/autotest_common.sh@70 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:41.775 21:03:19 -- common/autotest_common.sh@72 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:41.775 21:03:19 -- common/autotest_common.sh@74 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:41.775 21:03:19 -- common/autotest_common.sh@76 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:41.775 21:03:19 -- common/autotest_common.sh@78 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:41.775 21:03:19 -- common/autotest_common.sh@80 -- # : 1 00:07:41.775 21:03:19 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:41.775 21:03:19 -- common/autotest_common.sh@82 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:41.775 21:03:19 -- common/autotest_common.sh@84 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:41.775 21:03:19 -- common/autotest_common.sh@86 -- # : 1 00:07:41.775 21:03:19 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:41.775 21:03:19 -- common/autotest_common.sh@88 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:41.775 21:03:19 -- common/autotest_common.sh@90 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:41.775 21:03:19 -- common/autotest_common.sh@92 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:41.775 21:03:19 -- common/autotest_common.sh@94 -- # : 0 00:07:41.775 21:03:19 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:41.775 21:03:19 -- common/autotest_common.sh@96 -- # : tcp 00:07:41.776 21:03:19 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:41.776 21:03:19 -- common/autotest_common.sh@98 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:41.776 21:03:19 -- common/autotest_common.sh@100 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:41.776 21:03:19 -- common/autotest_common.sh@102 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:41.776 21:03:19 -- common/autotest_common.sh@104 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:41.776 
21:03:19 -- common/autotest_common.sh@106 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:41.776 21:03:19 -- common/autotest_common.sh@108 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:41.776 21:03:19 -- common/autotest_common.sh@110 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:41.776 21:03:19 -- common/autotest_common.sh@112 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:41.776 21:03:19 -- common/autotest_common.sh@114 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:41.776 21:03:19 -- common/autotest_common.sh@116 -- # : 1 00:07:41.776 21:03:19 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:41.776 21:03:19 -- common/autotest_common.sh@118 -- # : 00:07:41.776 21:03:19 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:41.776 21:03:19 -- common/autotest_common.sh@120 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:41.776 21:03:19 -- common/autotest_common.sh@122 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:41.776 21:03:19 -- common/autotest_common.sh@124 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:41.776 21:03:19 -- common/autotest_common.sh@126 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:41.776 21:03:19 -- common/autotest_common.sh@128 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:41.776 21:03:19 -- common/autotest_common.sh@130 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:41.776 21:03:19 -- common/autotest_common.sh@132 -- # : 00:07:41.776 21:03:19 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:41.776 21:03:19 -- common/autotest_common.sh@134 -- # : true 00:07:41.776 21:03:19 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:41.776 21:03:19 -- common/autotest_common.sh@136 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:41.776 21:03:19 -- common/autotest_common.sh@138 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:41.776 21:03:19 -- common/autotest_common.sh@140 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:41.776 21:03:19 -- common/autotest_common.sh@142 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:41.776 21:03:19 -- common/autotest_common.sh@144 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:41.776 21:03:19 -- common/autotest_common.sh@146 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:41.776 21:03:19 -- common/autotest_common.sh@148 -- # : e810 00:07:41.776 21:03:19 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:41.776 21:03:19 -- common/autotest_common.sh@150 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:41.776 21:03:19 -- common/autotest_common.sh@152 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:07:41.776 21:03:19 -- common/autotest_common.sh@154 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:41.776 21:03:19 -- common/autotest_common.sh@156 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:41.776 21:03:19 -- common/autotest_common.sh@158 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:41.776 21:03:19 -- common/autotest_common.sh@160 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:41.776 21:03:19 -- common/autotest_common.sh@163 -- # : 00:07:41.776 21:03:19 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:41.776 21:03:19 -- common/autotest_common.sh@165 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:41.776 21:03:19 -- common/autotest_common.sh@167 -- # : 0 00:07:41.776 21:03:19 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:41.776 21:03:19 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.776 21:03:19 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.776 21:03:19 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.776 21:03:19 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.776 21:03:19 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.776 21:03:19 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:41.776 21:03:19 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:41.776 21:03:19 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.776 21:03:19 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.776 21:03:19 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.776 21:03:19 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.776 21:03:19 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:41.776 21:03:19 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:41.776 21:03:19 -- common/autotest_common.sh@196 -- # cat 00:07:41.776 21:03:19 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:41.776 21:03:19 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.776 21:03:19 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.776 21:03:19 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.776 21:03:19 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.776 21:03:19 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:41.776 21:03:19 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:41.777 21:03:19 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.777 21:03:19 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.777 21:03:19 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.777 21:03:19 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.777 21:03:19 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.777 21:03:19 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.777 21:03:19 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.777 21:03:19 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.777 21:03:19 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:41.777 21:03:19 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:41.777 21:03:19 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.777 21:03:19 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.777 21:03:19 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:41.777 21:03:19 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:41.777 21:03:19 -- common/autotest_common.sh@249 -- # valgrind= 00:07:41.777 21:03:19 -- common/autotest_common.sh@255 -- # uname -s 00:07:41.777 21:03:19 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:41.777 21:03:19 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:41.777 21:03:19 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:41.777 21:03:19 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:41.777 21:03:19 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:41.777 21:03:19 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:41.777 21:03:19 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:41.777 21:03:19 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:41.777 21:03:19 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:41.777 21:03:19 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:41.777 21:03:19 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:41.777 21:03:19 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:41.777 21:03:19 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:41.777 21:03:19 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:41.777 21:03:19 -- common/autotest_common.sh@309 -- # [[ -z 2191464 ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@309 -- # 
kill -0 2191464 00:07:41.777 21:03:19 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:41.777 21:03:19 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:41.777 21:03:19 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:41.777 21:03:19 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:41.777 21:03:19 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:41.777 21:03:19 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:41.777 21:03:19 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:41.777 21:03:19 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.eqfkx9 00:07:41.777 21:03:19 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:41.777 21:03:19 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.eqfkx9/tests/target /tmp/spdk.eqfkx9 00:07:41.777 21:03:19 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@318 -- # df -T 00:07:41.777 21:03:19 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=956665856 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327763968 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=118759456768 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370980352 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=10611523584 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=64682897408 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=64685490176 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864499200 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874198528 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=9699328 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=216064 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=287744 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=64684163072 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685490176 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=1327104 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937093120 00:07:41.777 21:03:19 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937097216 00:07:41.777 21:03:19 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:41.777 21:03:19 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:41.777 21:03:19 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:41.777 * Looking for test storage... 
00:07:41.777 21:03:19 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:41.777 21:03:19 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:41.777 21:03:19 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.777 21:03:19 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:41.777 21:03:19 -- common/autotest_common.sh@363 -- # mount=/ 00:07:41.777 21:03:19 -- common/autotest_common.sh@365 -- # target_space=118759456768 00:07:41.777 21:03:19 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:41.777 21:03:19 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:41.777 21:03:19 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:41.777 21:03:19 -- common/autotest_common.sh@372 -- # new_size=12826116096 00:07:41.777 21:03:19 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:41.777 21:03:19 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.777 21:03:19 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.778 21:03:19 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.778 21:03:19 -- common/autotest_common.sh@380 -- # return 0 00:07:41.778 21:03:19 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:41.778 21:03:19 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:41.778 21:03:19 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:41.778 21:03:19 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:41.778 21:03:19 -- common/autotest_common.sh@1672 -- # true 00:07:41.778 21:03:19 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:41.778 21:03:19 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:41.778 21:03:19 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:41.778 21:03:19 -- common/autotest_common.sh@27 -- # exec 00:07:41.778 21:03:19 -- common/autotest_common.sh@29 -- # exec 00:07:41.778 21:03:19 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:41.778 21:03:19 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:41.778 21:03:19 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:41.778 21:03:19 -- common/autotest_common.sh@18 -- # set -x 00:07:41.778 21:03:19 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.778 21:03:19 -- nvmf/common.sh@7 -- # uname -s 00:07:41.778 21:03:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.778 21:03:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.778 21:03:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.778 21:03:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.778 21:03:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.778 21:03:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.778 21:03:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.778 21:03:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.778 21:03:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.778 21:03:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.778 21:03:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:41.778 21:03:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:41.778 21:03:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.778 21:03:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.778 21:03:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.778 21:03:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.778 21:03:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.778 21:03:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.778 21:03:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.778 21:03:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.778 21:03:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.778 21:03:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.778 21:03:19 -- paths/export.sh@5 -- # export PATH 00:07:41.778 21:03:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.778 21:03:19 -- nvmf/common.sh@46 -- # : 0 00:07:41.778 21:03:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:41.778 21:03:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:41.778 21:03:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:41.778 21:03:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.778 21:03:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.778 21:03:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:41.778 21:03:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:41.778 21:03:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:41.778 21:03:19 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:41.778 21:03:19 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:41.778 21:03:19 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:41.778 21:03:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:41.778 21:03:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.778 21:03:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:41.778 21:03:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:41.778 21:03:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:41.778 21:03:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.778 21:03:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.778 21:03:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.778 21:03:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:41.778 21:03:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:41.778 21:03:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:41.778 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:48.377 21:03:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:48.377 21:03:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:48.377 21:03:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:48.377 21:03:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:48.377 21:03:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:48.377 21:03:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:48.377 21:03:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:48.377 21:03:26 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:48.377 21:03:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:48.377 21:03:26 -- nvmf/common.sh@295 -- # e810=() 00:07:48.377 21:03:26 -- nvmf/common.sh@295 -- # local -ga e810 00:07:48.377 21:03:26 -- nvmf/common.sh@296 -- # x722=() 00:07:48.377 21:03:26 -- nvmf/common.sh@296 -- # local -ga x722 00:07:48.377 21:03:26 -- nvmf/common.sh@297 -- # mlx=() 00:07:48.377 21:03:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:48.377 21:03:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.377 21:03:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:48.377 21:03:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:48.377 21:03:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:48.377 21:03:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:48.377 21:03:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:48.377 21:03:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:48.378 21:03:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.378 21:03:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:48.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:48.378 21:03:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.378 21:03:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:48.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:48.378 21:03:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:48.378 21:03:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.378 21:03:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.378 21:03:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.378 21:03:26 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.378 21:03:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:48.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:48.378 21:03:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.378 21:03:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.378 21:03:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.378 21:03:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.378 21:03:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.378 21:03:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:48.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:48.378 21:03:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.378 21:03:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:48.378 21:03:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:48.378 21:03:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:48.378 21:03:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:48.378 21:03:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.378 21:03:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.378 21:03:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.378 21:03:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:48.378 21:03:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.378 21:03:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.378 21:03:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:48.378 21:03:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.378 21:03:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.378 21:03:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:48.378 21:03:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:48.378 21:03:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.378 21:03:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.639 21:03:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.639 21:03:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.639 21:03:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:48.639 21:03:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.639 21:03:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.639 21:03:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.639 21:03:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:48.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:07:48.639 00:07:48.639 --- 10.0.0.2 ping statistics --- 00:07:48.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.639 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:07:48.639 21:03:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:07:48.639 00:07:48.639 --- 10.0.0.1 ping statistics --- 00:07:48.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.639 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:07:48.639 21:03:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.639 21:03:26 -- nvmf/common.sh@410 -- # return 0 00:07:48.639 21:03:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:48.639 21:03:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.639 21:03:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:48.639 21:03:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:48.639 21:03:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.639 21:03:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:48.639 21:03:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:48.900 21:03:26 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:48.900 21:03:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:48.900 21:03:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.900 21:03:26 -- common/autotest_common.sh@10 -- # set +x 00:07:48.900 ************************************ 00:07:48.900 START TEST nvmf_filesystem_no_in_capsule 00:07:48.900 ************************************ 00:07:48.900 21:03:26 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:48.900 21:03:26 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:48.900 21:03:26 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.900 21:03:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:48.900 21:03:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.900 21:03:26 -- common/autotest_common.sh@10 -- # set +x 00:07:48.900 21:03:26 -- nvmf/common.sh@469 -- # nvmfpid=2195289 00:07:48.900 21:03:26 -- nvmf/common.sh@470 -- # waitforlisten 2195289 00:07:48.900 21:03:26 -- common/autotest_common.sh@819 -- # '[' -z 2195289 ']' 00:07:48.900 21:03:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.900 21:03:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.900 21:03:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:48.901 21:03:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.901 21:03:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:48.901 21:03:26 -- common/autotest_common.sh@10 -- # set +x 00:07:48.901 [2024-06-08 21:03:26.830651] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
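nvmf_tcp_init, traced above, puts one E810 port into a throw-away network namespace as the target side (10.0.0.2) and leaves the other in the default namespace as the initiator side (10.0.0.1). Condensed from the trace, using the interface names and addressing from this run:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start clean
    ip netns add cvl_0_0_ns_spdk                              # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (4420) on the initiator-side port
    ping -c 1 10.0.0.2                                        # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as in the trace just below), so it listens on 10.0.0.2 while the kernel initiator connects from the default namespace.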
00:07:48.901 [2024-06-08 21:03:26.830714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.901 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.901 [2024-06-08 21:03:26.896609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.901 [2024-06-08 21:03:26.961551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.901 [2024-06-08 21:03:26.961674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.901 [2024-06-08 21:03:26.961682] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.901 [2024-06-08 21:03:26.961689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.901 [2024-06-08 21:03:26.961836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.901 [2024-06-08 21:03:26.961974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.901 [2024-06-08 21:03:26.962126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.901 [2024-06-08 21:03:26.962127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.841 21:03:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:49.841 21:03:27 -- common/autotest_common.sh@852 -- # return 0 00:07:49.841 21:03:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:49.841 21:03:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.841 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 21:03:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.841 21:03:27 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:49.841 21:03:27 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:49.842 21:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.842 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 [2024-06-08 21:03:27.646594] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.842 21:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.842 21:03:27 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:49.842 21:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.842 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 Malloc1 00:07:49.842 21:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.842 21:03:27 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:49.842 21:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.842 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 21:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.842 21:03:27 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:49.842 21:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.842 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 21:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.842 21:03:27 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
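The rpc_cmd calls traced above configure the freshly started target; rpc_cmd is a thin wrapper that forwards these arguments to scripts/rpc.py against the default /var/tmp/spdk.sock RPC socket (the wrapper and socket path are assumptions based on standard SPDK tooling; the flags are verbatim from the trace). Standalone equivalents for this no-in-capsule pass would look like:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                         # TCP transport, in-capsule data size 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                                # 512 MiB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, set serial
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420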
00:07:49.842 21:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.842 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 [2024-06-08 21:03:27.774633] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.842 21:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.842 21:03:27 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:49.842 21:03:27 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:49.842 21:03:27 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:49.842 21:03:27 -- common/autotest_common.sh@1359 -- # local bs 00:07:49.842 21:03:27 -- common/autotest_common.sh@1360 -- # local nb 00:07:49.842 21:03:27 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:49.842 21:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.842 21:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.842 21:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.842 21:03:27 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:49.842 { 00:07:49.842 "name": "Malloc1", 00:07:49.842 "aliases": [ 00:07:49.842 "a4ed46c6-dd5b-4ed9-9a77-3020ab3aa9e4" 00:07:49.842 ], 00:07:49.842 "product_name": "Malloc disk", 00:07:49.842 "block_size": 512, 00:07:49.842 "num_blocks": 1048576, 00:07:49.842 "uuid": "a4ed46c6-dd5b-4ed9-9a77-3020ab3aa9e4", 00:07:49.842 "assigned_rate_limits": { 00:07:49.842 "rw_ios_per_sec": 0, 00:07:49.842 "rw_mbytes_per_sec": 0, 00:07:49.842 "r_mbytes_per_sec": 0, 00:07:49.842 "w_mbytes_per_sec": 0 00:07:49.842 }, 00:07:49.842 "claimed": true, 00:07:49.842 "claim_type": "exclusive_write", 00:07:49.842 "zoned": false, 00:07:49.842 "supported_io_types": { 00:07:49.842 "read": true, 00:07:49.842 "write": true, 00:07:49.842 "unmap": true, 00:07:49.842 "write_zeroes": true, 00:07:49.842 "flush": true, 00:07:49.842 "reset": true, 00:07:49.842 "compare": false, 00:07:49.842 "compare_and_write": false, 00:07:49.842 "abort": true, 00:07:49.842 "nvme_admin": false, 00:07:49.842 "nvme_io": false 00:07:49.842 }, 00:07:49.842 "memory_domains": [ 00:07:49.842 { 00:07:49.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.842 "dma_device_type": 2 00:07:49.842 } 00:07:49.842 ], 00:07:49.842 "driver_specific": {} 00:07:49.842 } 00:07:49.842 ]' 00:07:49.842 21:03:27 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:49.842 21:03:27 -- common/autotest_common.sh@1362 -- # bs=512 00:07:49.842 21:03:27 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:49.842 21:03:27 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:49.842 21:03:27 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:49.842 21:03:27 -- common/autotest_common.sh@1367 -- # echo 512 00:07:49.842 21:03:27 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:49.842 21:03:27 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:51.754 21:03:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:51.754 21:03:29 -- common/autotest_common.sh@1177 -- # local i=0 00:07:51.754 21:03:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:51.754 21:03:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:51.754 21:03:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:53.689 21:03:31 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:53.689 21:03:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:53.689 21:03:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:53.689 21:03:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:53.689 21:03:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.689 21:03:31 -- common/autotest_common.sh@1187 -- # return 0 00:07:53.689 21:03:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:53.689 21:03:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:53.689 21:03:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:53.689 21:03:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:53.689 21:03:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:53.689 21:03:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:53.689 21:03:31 -- setup/common.sh@80 -- # echo 536870912 00:07:53.689 21:03:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:53.689 21:03:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:53.689 21:03:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:53.689 21:03:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:53.689 21:03:31 -- target/filesystem.sh@69 -- # partprobe 00:07:53.689 21:03:31 -- target/filesystem.sh@70 -- # sleep 1 00:07:54.641 21:03:32 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:54.641 21:03:32 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:54.641 21:03:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.641 21:03:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.641 21:03:32 -- common/autotest_common.sh@10 -- # set +x 00:07:54.641 ************************************ 00:07:54.641 START TEST filesystem_ext4 00:07:54.641 ************************************ 00:07:54.641 21:03:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:54.641 21:03:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:54.641 21:03:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.641 21:03:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:54.641 21:03:32 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:54.641 21:03:32 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.641 21:03:32 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.641 21:03:32 -- common/autotest_common.sh@905 -- # local force 00:07:54.641 21:03:32 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:54.641 21:03:32 -- common/autotest_common.sh@908 -- # force=-F 00:07:54.641 21:03:32 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:54.641 mke2fs 1.46.5 (30-Dec-2021) 00:07:54.902 Discarding device blocks: 0/522240 done 00:07:54.902 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:54.902 Filesystem UUID: 4de045a1-732c-474b-a374-78f2fe611889 00:07:54.902 Superblock backups stored on blocks: 00:07:54.902 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:54.902 00:07:54.902 Allocating group tables: 0/64 done 00:07:54.902 Writing inode tables: 0/64 done 00:07:55.479 Creating journal (8192 blocks): done 00:07:55.479 Writing superblocks and filesystem accounting information: 0/64 done 00:07:55.479 00:07:55.479 21:03:33 -- 
common/autotest_common.sh@921 -- # return 0 00:07:55.479 21:03:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.479 21:03:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.479 21:03:33 -- target/filesystem.sh@25 -- # sync 00:07:55.479 21:03:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.479 21:03:33 -- target/filesystem.sh@27 -- # sync 00:07:55.479 21:03:33 -- target/filesystem.sh@29 -- # i=0 00:07:55.479 21:03:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.479 21:03:33 -- target/filesystem.sh@37 -- # kill -0 2195289 00:07:55.479 21:03:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.479 21:03:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.479 21:03:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.479 21:03:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.479 00:07:55.479 real 0m0.845s 00:07:55.479 user 0m0.029s 00:07:55.479 sys 0m0.065s 00:07:55.479 21:03:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.479 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:07:55.479 ************************************ 00:07:55.479 END TEST filesystem_ext4 00:07:55.479 ************************************ 00:07:55.479 21:03:33 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:55.479 21:03:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:55.479 21:03:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.479 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:07:55.740 ************************************ 00:07:55.740 START TEST filesystem_btrfs 00:07:55.740 ************************************ 00:07:55.740 21:03:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:55.740 21:03:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:55.740 21:03:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.740 21:03:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:55.740 21:03:33 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:55.740 21:03:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:55.740 21:03:33 -- common/autotest_common.sh@904 -- # local i=0 00:07:55.740 21:03:33 -- common/autotest_common.sh@905 -- # local force 00:07:55.740 21:03:33 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:55.740 21:03:33 -- common/autotest_common.sh@910 -- # force=-f 00:07:55.740 21:03:33 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:56.000 btrfs-progs v6.6.2 00:07:56.000 See https://btrfs.readthedocs.io for more information. 00:07:56.000 00:07:56.000 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:56.000 NOTE: several default settings have changed in version 5.15, please make sure 00:07:56.000 this does not affect your deployments: 00:07:56.000 - DUP for metadata (-m dup) 00:07:56.000 - enabled no-holes (-O no-holes) 00:07:56.000 - enabled free-space-tree (-R free-space-tree) 00:07:56.000 00:07:56.000 Label: (null) 00:07:56.000 UUID: 14b18a8b-9f4c-410d-beff-9ccfc1cb6e7d 00:07:56.000 Node size: 16384 00:07:56.000 Sector size: 4096 00:07:56.000 Filesystem size: 510.00MiB 00:07:56.000 Block group profiles: 00:07:56.000 Data: single 8.00MiB 00:07:56.000 Metadata: DUP 32.00MiB 00:07:56.000 System: DUP 8.00MiB 00:07:56.000 SSD detected: yes 00:07:56.000 Zoned device: no 00:07:56.000 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:56.000 Runtime features: free-space-tree 00:07:56.000 Checksum: crc32c 00:07:56.000 Number of devices: 1 00:07:56.000 Devices: 00:07:56.000 ID SIZE PATH 00:07:56.000 1 510.00MiB /dev/nvme0n1p1 00:07:56.000 00:07:56.000 21:03:34 -- common/autotest_common.sh@921 -- # return 0 00:07:56.000 21:03:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.260 21:03:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.260 21:03:34 -- target/filesystem.sh@25 -- # sync 00:07:56.260 21:03:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.260 21:03:34 -- target/filesystem.sh@27 -- # sync 00:07:56.260 21:03:34 -- target/filesystem.sh@29 -- # i=0 00:07:56.260 21:03:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.520 21:03:34 -- target/filesystem.sh@37 -- # kill -0 2195289 00:07:56.520 21:03:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.520 21:03:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.520 21:03:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.520 21:03:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.520 00:07:56.520 real 0m0.817s 00:07:56.520 user 0m0.035s 00:07:56.520 sys 0m0.126s 00:07:56.520 21:03:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.520 21:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:56.520 ************************************ 00:07:56.520 END TEST filesystem_btrfs 00:07:56.520 ************************************ 00:07:56.520 21:03:34 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:56.520 21:03:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:56.520 21:03:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.520 21:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:56.520 ************************************ 00:07:56.520 START TEST filesystem_xfs 00:07:56.520 ************************************ 00:07:56.520 21:03:34 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:56.520 21:03:34 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:56.520 21:03:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.520 21:03:34 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:56.520 21:03:34 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:56.520 21:03:34 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:56.520 21:03:34 -- common/autotest_common.sh@904 -- # local i=0 00:07:56.520 21:03:34 -- common/autotest_common.sh@905 -- # local force 00:07:56.520 21:03:34 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:56.520 21:03:34 -- common/autotest_common.sh@910 -- # force=-f 00:07:56.520 21:03:34 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:56.520 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:56.520 = sectsz=512 attr=2, projid32bit=1 00:07:56.520 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:56.520 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:56.520 data = bsize=4096 blocks=130560, imaxpct=25 00:07:56.520 = sunit=0 swidth=0 blks 00:07:56.520 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:56.520 log =internal log bsize=4096 blocks=16384, version=2 00:07:56.520 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:56.520 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:57.464 Discarding blocks...Done. 00:07:57.464 21:03:35 -- common/autotest_common.sh@921 -- # return 0 00:07:57.464 21:03:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.376 21:03:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.376 21:03:37 -- target/filesystem.sh@25 -- # sync 00:07:59.376 21:03:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.376 21:03:37 -- target/filesystem.sh@27 -- # sync 00:07:59.376 21:03:37 -- target/filesystem.sh@29 -- # i=0 00:07:59.377 21:03:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.377 21:03:37 -- target/filesystem.sh@37 -- # kill -0 2195289 00:07:59.377 21:03:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.377 21:03:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.377 21:03:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.377 21:03:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.377 00:07:59.377 real 0m2.952s 00:07:59.377 user 0m0.021s 00:07:59.377 sys 0m0.080s 00:07:59.377 21:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.377 21:03:37 -- common/autotest_common.sh@10 -- # set +x 00:07:59.377 ************************************ 00:07:59.377 END TEST filesystem_xfs 00:07:59.377 ************************************ 00:07:59.377 21:03:37 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.638 21:03:37 -- target/filesystem.sh@93 -- # sync 00:07:59.638 21:03:37 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.899 21:03:37 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.899 21:03:37 -- common/autotest_common.sh@1198 -- # local i=0 00:07:59.899 21:03:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:59.899 21:03:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.899 21:03:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:59.899 21:03:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.899 21:03:37 -- common/autotest_common.sh@1210 -- # return 0 00:07:59.899 21:03:37 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.899 21:03:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.899 21:03:37 -- common/autotest_common.sh@10 -- # set +x 00:07:59.899 21:03:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.899 21:03:37 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.899 21:03:37 -- target/filesystem.sh@101 -- # killprocess 2195289 00:07:59.899 21:03:37 -- common/autotest_common.sh@926 -- # '[' -z 2195289 ']' 00:07:59.899 21:03:37 -- common/autotest_common.sh@930 -- # kill -0 2195289 00:07:59.899 21:03:37 -- 
common/autotest_common.sh@931 -- # uname 00:07:59.899 21:03:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:59.899 21:03:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2195289 00:07:59.899 21:03:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:59.899 21:03:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:59.899 21:03:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2195289' 00:07:59.899 killing process with pid 2195289 00:07:59.899 21:03:37 -- common/autotest_common.sh@945 -- # kill 2195289 00:07:59.899 21:03:37 -- common/autotest_common.sh@950 -- # wait 2195289 00:08:00.160 21:03:38 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:00.160 00:08:00.160 real 0m11.387s 00:08:00.160 user 0m44.836s 00:08:00.160 sys 0m1.108s 00:08:00.160 21:03:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.160 21:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:00.160 ************************************ 00:08:00.160 END TEST nvmf_filesystem_no_in_capsule 00:08:00.160 ************************************ 00:08:00.160 21:03:38 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:00.160 21:03:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.160 21:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.160 21:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:00.160 ************************************ 00:08:00.160 START TEST nvmf_filesystem_in_capsule 00:08:00.160 ************************************ 00:08:00.160 21:03:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:00.160 21:03:38 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:00.160 21:03:38 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:00.160 21:03:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:00.160 21:03:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.160 21:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:00.160 21:03:38 -- nvmf/common.sh@469 -- # nvmfpid=2197613 00:08:00.160 21:03:38 -- nvmf/common.sh@470 -- # waitforlisten 2197613 00:08:00.160 21:03:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.160 21:03:38 -- common/autotest_common.sh@819 -- # '[' -z 2197613 ']' 00:08:00.160 21:03:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.160 21:03:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:00.160 21:03:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.160 21:03:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:00.160 21:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:00.421 [2024-06-08 21:03:38.259181] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
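Before the second pass gets going: the host side of the pass that just finished repeats the same few steps for each filesystem. Condensed from the trace, with the waitforserial retry loop simplified and $NVME_HOSTNQN/$NVME_HOSTID standing in for the uuid pair that nvmf/common.sh derived from nvme gen-hostnqn earlier in this log:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID
    # wait until a block device carrying the target's serial shows up
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%    # one partition over the 512 MiB namespace
    partprobe
    mkfs.ext4 -F /dev/nvme0n1p1            # or mkfs.btrfs -f / mkfs.xfs -f per sub-test
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync    # minimal I/O check
    umount /mnt/device
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1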
00:08:00.421 [2024-06-08 21:03:38.259231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.421 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.421 [2024-06-08 21:03:38.323889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.421 [2024-06-08 21:03:38.387545] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:00.421 [2024-06-08 21:03:38.387673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.421 [2024-06-08 21:03:38.387683] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.421 [2024-06-08 21:03:38.387691] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.421 [2024-06-08 21:03:38.387826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.421 [2024-06-08 21:03:38.387930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.421 [2024-06-08 21:03:38.388067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.421 [2024-06-08 21:03:38.388068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.993 21:03:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.993 21:03:39 -- common/autotest_common.sh@852 -- # return 0 00:08:00.993 21:03:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:00.993 21:03:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:00.993 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.993 21:03:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.993 21:03:39 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.993 21:03:39 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:00.993 21:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.993 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.993 [2024-06-08 21:03:39.060588] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.993 21:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.993 21:03:39 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:00.993 21:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.993 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:01.254 Malloc1 00:08:01.254 21:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.254 21:03:39 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:01.254 21:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:01.254 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:01.254 21:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.254 21:03:39 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:01.254 21:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:01.254 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:01.254 21:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.254 21:03:39 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
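This second pass repeats the same bring-up with one difference: the TCP transport is created with a 4096-byte in-capsule data size instead of 0, so small write payloads can travel inside the command capsule rather than in a separate data transfer. Only the transport call changes (rpc.py form assumed, as in the earlier sketch; flags verbatim from the trace):

    # first pass (nvmf_filesystem_no_in_capsule)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this pass (nvmf_filesystem_in_capsule)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096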
00:08:01.254 21:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:01.254 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:01.254 [2024-06-08 21:03:39.183360] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.254 21:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.254 21:03:39 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:01.254 21:03:39 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:01.254 21:03:39 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:01.254 21:03:39 -- common/autotest_common.sh@1359 -- # local bs 00:08:01.254 21:03:39 -- common/autotest_common.sh@1360 -- # local nb 00:08:01.254 21:03:39 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:01.254 21:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:01.254 21:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:01.254 21:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.254 21:03:39 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:01.254 { 00:08:01.254 "name": "Malloc1", 00:08:01.254 "aliases": [ 00:08:01.254 "17a25587-68ce-4a19-b21d-62bbcab4f48d" 00:08:01.254 ], 00:08:01.254 "product_name": "Malloc disk", 00:08:01.254 "block_size": 512, 00:08:01.254 "num_blocks": 1048576, 00:08:01.254 "uuid": "17a25587-68ce-4a19-b21d-62bbcab4f48d", 00:08:01.254 "assigned_rate_limits": { 00:08:01.254 "rw_ios_per_sec": 0, 00:08:01.254 "rw_mbytes_per_sec": 0, 00:08:01.254 "r_mbytes_per_sec": 0, 00:08:01.254 "w_mbytes_per_sec": 0 00:08:01.254 }, 00:08:01.254 "claimed": true, 00:08:01.254 "claim_type": "exclusive_write", 00:08:01.254 "zoned": false, 00:08:01.254 "supported_io_types": { 00:08:01.254 "read": true, 00:08:01.254 "write": true, 00:08:01.254 "unmap": true, 00:08:01.254 "write_zeroes": true, 00:08:01.254 "flush": true, 00:08:01.254 "reset": true, 00:08:01.254 "compare": false, 00:08:01.254 "compare_and_write": false, 00:08:01.254 "abort": true, 00:08:01.254 "nvme_admin": false, 00:08:01.254 "nvme_io": false 00:08:01.254 }, 00:08:01.254 "memory_domains": [ 00:08:01.254 { 00:08:01.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.254 "dma_device_type": 2 00:08:01.254 } 00:08:01.254 ], 00:08:01.254 "driver_specific": {} 00:08:01.254 } 00:08:01.254 ]' 00:08:01.254 21:03:39 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:01.254 21:03:39 -- common/autotest_common.sh@1362 -- # bs=512 00:08:01.254 21:03:39 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:01.254 21:03:39 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:01.254 21:03:39 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:01.254 21:03:39 -- common/autotest_common.sh@1367 -- # echo 512 00:08:01.254 21:03:39 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:01.254 21:03:39 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.167 21:03:40 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.167 21:03:40 -- common/autotest_common.sh@1177 -- # local i=0 00:08:03.167 21:03:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.167 21:03:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:03.167 21:03:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:05.081 21:03:42 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:05.081 21:03:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:05.081 21:03:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.081 21:03:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:05.081 21:03:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.081 21:03:42 -- common/autotest_common.sh@1187 -- # return 0 00:08:05.081 21:03:42 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:05.081 21:03:42 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:05.081 21:03:42 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:05.081 21:03:42 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:05.081 21:03:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:05.081 21:03:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:05.081 21:03:42 -- setup/common.sh@80 -- # echo 536870912 00:08:05.081 21:03:42 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:05.081 21:03:42 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:05.081 21:03:42 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:05.081 21:03:42 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:05.342 21:03:43 -- target/filesystem.sh@69 -- # partprobe 00:08:05.914 21:03:43 -- target/filesystem.sh@70 -- # sleep 1 00:08:07.298 21:03:44 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:07.298 21:03:44 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:07.298 21:03:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:07.298 21:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.298 21:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:07.298 ************************************ 00:08:07.298 START TEST filesystem_in_capsule_ext4 00:08:07.298 ************************************ 00:08:07.298 21:03:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:07.298 21:03:44 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:07.298 21:03:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.298 21:03:44 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:07.298 21:03:44 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:07.298 21:03:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:07.298 21:03:44 -- common/autotest_common.sh@904 -- # local i=0 00:08:07.298 21:03:44 -- common/autotest_common.sh@905 -- # local force 00:08:07.298 21:03:44 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:07.298 21:03:44 -- common/autotest_common.sh@908 -- # force=-F 00:08:07.298 21:03:44 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:07.298 mke2fs 1.46.5 (30-Dec-2021) 00:08:07.298 Discarding device blocks: 0/522240 done 00:08:07.298 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:07.298 Filesystem UUID: e241ef4c-d5ab-4b98-8adc-7999dcf37e42 00:08:07.298 Superblock backups stored on blocks: 00:08:07.298 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:07.298 00:08:07.298 Allocating group tables: 0/64 done 00:08:07.298 Writing inode tables: 0/64 done 00:08:08.241 Creating journal (8192 blocks): done 00:08:08.241 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.241 00:08:08.241 
21:03:46 -- common/autotest_common.sh@921 -- # return 0 00:08:08.241 21:03:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.503 21:03:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.503 21:03:46 -- target/filesystem.sh@25 -- # sync 00:08:08.503 21:03:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.503 21:03:46 -- target/filesystem.sh@27 -- # sync 00:08:08.503 21:03:46 -- target/filesystem.sh@29 -- # i=0 00:08:08.503 21:03:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.503 21:03:46 -- target/filesystem.sh@37 -- # kill -0 2197613 00:08:08.503 21:03:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.503 21:03:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.503 21:03:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.503 21:03:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.503 00:08:08.503 real 0m1.530s 00:08:08.503 user 0m0.030s 00:08:08.503 sys 0m0.067s 00:08:08.503 21:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.503 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:08.503 ************************************ 00:08:08.503 END TEST filesystem_in_capsule_ext4 00:08:08.503 ************************************ 00:08:08.503 21:03:46 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.503 21:03:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:08.503 21:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.503 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:08.503 ************************************ 00:08:08.503 START TEST filesystem_in_capsule_btrfs 00:08:08.503 ************************************ 00:08:08.503 21:03:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.503 21:03:46 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.503 21:03:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.503 21:03:46 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.503 21:03:46 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:08.503 21:03:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:08.503 21:03:46 -- common/autotest_common.sh@904 -- # local i=0 00:08:08.503 21:03:46 -- common/autotest_common.sh@905 -- # local force 00:08:08.503 21:03:46 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:08.503 21:03:46 -- common/autotest_common.sh@910 -- # force=-f 00:08:08.503 21:03:46 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.764 btrfs-progs v6.6.2 00:08:08.764 See https://btrfs.readthedocs.io for more information. 00:08:08.764 00:08:08.764 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:08.764 NOTE: several default settings have changed in version 5.15, please make sure 00:08:08.764 this does not affect your deployments: 00:08:08.764 - DUP for metadata (-m dup) 00:08:08.764 - enabled no-holes (-O no-holes) 00:08:08.764 - enabled free-space-tree (-R free-space-tree) 00:08:08.764 00:08:08.764 Label: (null) 00:08:08.764 UUID: 276117f1-03dd-4801-bab7-7ff683b2dbf5 00:08:08.764 Node size: 16384 00:08:08.764 Sector size: 4096 00:08:08.764 Filesystem size: 510.00MiB 00:08:08.764 Block group profiles: 00:08:08.764 Data: single 8.00MiB 00:08:08.764 Metadata: DUP 32.00MiB 00:08:08.764 System: DUP 8.00MiB 00:08:08.764 SSD detected: yes 00:08:08.764 Zoned device: no 00:08:08.764 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:08.764 Runtime features: free-space-tree 00:08:08.764 Checksum: crc32c 00:08:08.764 Number of devices: 1 00:08:08.764 Devices: 00:08:08.764 ID SIZE PATH 00:08:08.764 1 510.00MiB /dev/nvme0n1p1 00:08:08.764 00:08:08.764 21:03:46 -- common/autotest_common.sh@921 -- # return 0 00:08:08.764 21:03:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.025 21:03:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.025 21:03:47 -- target/filesystem.sh@25 -- # sync 00:08:09.025 21:03:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.025 21:03:47 -- target/filesystem.sh@27 -- # sync 00:08:09.286 21:03:47 -- target/filesystem.sh@29 -- # i=0 00:08:09.286 21:03:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.286 21:03:47 -- target/filesystem.sh@37 -- # kill -0 2197613 00:08:09.286 21:03:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.286 21:03:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.286 21:03:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.286 21:03:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.286 00:08:09.286 real 0m0.633s 00:08:09.286 user 0m0.023s 00:08:09.286 sys 0m0.137s 00:08:09.286 21:03:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.286 21:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.286 ************************************ 00:08:09.286 END TEST filesystem_in_capsule_btrfs 00:08:09.286 ************************************ 00:08:09.286 21:03:47 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.286 21:03:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:09.286 21:03:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.286 21:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.286 ************************************ 00:08:09.286 START TEST filesystem_in_capsule_xfs 00:08:09.286 ************************************ 00:08:09.286 21:03:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.286 21:03:47 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.286 21:03:47 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.286 21:03:47 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.286 21:03:47 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:09.286 21:03:47 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:09.286 21:03:47 -- common/autotest_common.sh@904 -- # local i=0 00:08:09.286 21:03:47 -- common/autotest_common.sh@905 -- # local force 00:08:09.286 21:03:47 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:09.286 21:03:47 -- common/autotest_common.sh@910 -- # force=-f 
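make_filesystem, traced here for the third time in this pass, mostly just picks the right force flag for the requested filesystem before invoking mkfs. A sketch of what the trace shows (the i counter in the trace hints at a retry loop whose body is not visible here, so it is left out):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F       # mke2fs uses -F to force
        else
            force=-f       # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }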
00:08:09.286 21:03:47 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.286 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.286 = sectsz=512 attr=2, projid32bit=1 00:08:09.286 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.286 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.286 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.286 = sunit=0 swidth=0 blks 00:08:09.286 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:09.286 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.286 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.286 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:10.242 Discarding blocks...Done. 00:08:10.242 21:03:48 -- common/autotest_common.sh@921 -- # return 0 00:08:10.242 21:03:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.832 21:03:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.832 21:03:50 -- target/filesystem.sh@25 -- # sync 00:08:12.832 21:03:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.832 21:03:50 -- target/filesystem.sh@27 -- # sync 00:08:12.832 21:03:50 -- target/filesystem.sh@29 -- # i=0 00:08:12.832 21:03:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.832 21:03:50 -- target/filesystem.sh@37 -- # kill -0 2197613 00:08:12.832 21:03:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.832 21:03:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.833 21:03:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.833 21:03:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.833 00:08:12.833 real 0m3.628s 00:08:12.833 user 0m0.031s 00:08:12.833 sys 0m0.073s 00:08:12.833 21:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.833 21:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.833 ************************************ 00:08:12.833 END TEST filesystem_in_capsule_xfs 00:08:12.833 ************************************ 00:08:12.833 21:03:50 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.094 21:03:50 -- target/filesystem.sh@93 -- # sync 00:08:13.094 21:03:50 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.094 21:03:51 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.094 21:03:51 -- common/autotest_common.sh@1198 -- # local i=0 00:08:13.094 21:03:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:13.094 21:03:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.094 21:03:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:13.094 21:03:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.094 21:03:51 -- common/autotest_common.sh@1210 -- # return 0 00:08:13.094 21:03:51 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.094 21:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.094 21:03:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.094 21:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.094 21:03:51 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.094 21:03:51 -- target/filesystem.sh@101 -- # killprocess 2197613 00:08:13.094 21:03:51 -- common/autotest_common.sh@926 -- # '[' -z 2197613 ']' 00:08:13.094 21:03:51 -- common/autotest_common.sh@930 -- # kill -0 2197613 
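Teardown, which completes just below, is symmetric with the setup: disconnect the initiator, delete the subsystem over RPC, stop nvmf_tgt, then nvmftestfini unloads nvme-tcp and dismantles the namespace. Condensed, with the pid from this pass and the rpc.py form assumed as before (remove_spdk_ns runs with xtrace disabled, so the ip netns delete is an inference):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2197613 && wait 2197613            # killprocess() on the nvmf_tgt pid
    modprobe -v -r nvme-tcp                 # also drops nvme_fabrics / nvme_keyring, per the rmmod lines below
    ip netns delete cvl_0_0_ns_spdk         # presumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1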
00:08:13.094 21:03:51 -- common/autotest_common.sh@931 -- # uname 00:08:13.094 21:03:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:13.094 21:03:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2197613 00:08:13.094 21:03:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:13.094 21:03:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:13.094 21:03:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2197613' 00:08:13.094 killing process with pid 2197613 00:08:13.094 21:03:51 -- common/autotest_common.sh@945 -- # kill 2197613 00:08:13.094 21:03:51 -- common/autotest_common.sh@950 -- # wait 2197613 00:08:13.355 21:03:51 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:13.355 00:08:13.355 real 0m13.179s 00:08:13.355 user 0m51.957s 00:08:13.355 sys 0m1.148s 00:08:13.355 21:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.355 21:03:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.355 ************************************ 00:08:13.355 END TEST nvmf_filesystem_in_capsule 00:08:13.355 ************************************ 00:08:13.355 21:03:51 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:13.355 21:03:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:13.355 21:03:51 -- nvmf/common.sh@116 -- # sync 00:08:13.355 21:03:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:13.355 21:03:51 -- nvmf/common.sh@119 -- # set +e 00:08:13.355 21:03:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:13.355 21:03:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:13.355 rmmod nvme_tcp 00:08:13.616 rmmod nvme_fabrics 00:08:13.616 rmmod nvme_keyring 00:08:13.616 21:03:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:13.616 21:03:51 -- nvmf/common.sh@123 -- # set -e 00:08:13.616 21:03:51 -- nvmf/common.sh@124 -- # return 0 00:08:13.616 21:03:51 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:13.616 21:03:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:13.616 21:03:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:13.616 21:03:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:13.616 21:03:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.616 21:03:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:13.616 21:03:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.616 21:03:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.616 21:03:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.635 21:03:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:15.635 00:08:15.635 real 0m34.149s 00:08:15.635 user 1m38.962s 00:08:15.635 sys 0m7.620s 00:08:15.635 21:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.635 21:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:15.635 ************************************ 00:08:15.635 END TEST nvmf_filesystem 00:08:15.635 ************************************ 00:08:15.635 21:03:53 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.635 21:03:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:15.635 21:03:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.635 21:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:15.635 ************************************ 00:08:15.635 START TEST nvmf_discovery 00:08:15.635 ************************************ 00:08:15.635 
21:03:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.635 * Looking for test storage... 00:08:15.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.635 21:03:53 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.635 21:03:53 -- nvmf/common.sh@7 -- # uname -s 00:08:15.635 21:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.635 21:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.635 21:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.635 21:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.635 21:03:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.635 21:03:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.635 21:03:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.635 21:03:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.635 21:03:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.635 21:03:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.635 21:03:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.635 21:03:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.635 21:03:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.635 21:03:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.635 21:03:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.635 21:03:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.635 21:03:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.635 21:03:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.635 21:03:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.635 21:03:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.635 21:03:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.635 21:03:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.635 21:03:53 -- paths/export.sh@5 -- # export PATH 00:08:15.635 21:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.635 21:03:53 -- nvmf/common.sh@46 -- # : 0 00:08:15.635 21:03:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:15.635 21:03:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:15.635 21:03:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:15.635 21:03:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.635 21:03:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.635 21:03:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:15.635 21:03:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:15.635 21:03:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:15.635 21:03:53 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:15.635 21:03:53 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:15.635 21:03:53 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:15.635 21:03:53 -- target/discovery.sh@15 -- # hash nvme 00:08:15.635 21:03:53 -- target/discovery.sh@20 -- # nvmftestinit 00:08:15.635 21:03:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:15.635 21:03:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.635 21:03:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:15.635 21:03:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:15.635 21:03:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:15.635 21:03:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.635 21:03:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.635 21:03:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.897 21:03:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:15.897 21:03:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:15.897 21:03:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:15.897 21:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:22.488 21:04:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:22.488 21:04:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:22.488 21:04:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:22.488 21:04:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:22.488 21:04:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:22.488 21:04:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:22.488 21:04:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:22.488 21:04:00 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:22.488 21:04:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:22.488 21:04:00 -- nvmf/common.sh@295 -- # e810=() 00:08:22.488 21:04:00 -- nvmf/common.sh@295 -- # local -ga e810 00:08:22.488 21:04:00 -- nvmf/common.sh@296 -- # x722=() 00:08:22.488 21:04:00 -- nvmf/common.sh@296 -- # local -ga x722 00:08:22.488 21:04:00 -- nvmf/common.sh@297 -- # mlx=() 00:08:22.488 21:04:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:22.488 21:04:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.488 21:04:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:22.488 21:04:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:22.488 21:04:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:22.488 21:04:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:22.488 21:04:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:22.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:22.488 21:04:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:22.488 21:04:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:22.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:22.488 21:04:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:22.488 21:04:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:22.488 21:04:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.488 21:04:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:22.488 21:04:00 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.488 21:04:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:22.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:22.488 21:04:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.488 21:04:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:22.488 21:04:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.488 21:04:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:22.488 21:04:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.488 21:04:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:22.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:22.488 21:04:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.488 21:04:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:22.488 21:04:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:22.488 21:04:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:22.488 21:04:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:22.488 21:04:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.488 21:04:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.488 21:04:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.488 21:04:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:22.488 21:04:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.488 21:04:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.488 21:04:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:22.488 21:04:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.488 21:04:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.488 21:04:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:22.488 21:04:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:22.488 21:04:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.488 21:04:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.750 21:04:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.750 21:04:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.750 21:04:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:22.750 21:04:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.750 21:04:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.750 21:04:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.750 21:04:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:22.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:08:22.750 00:08:22.750 --- 10.0.0.2 ping statistics --- 00:08:22.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.750 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:08:22.750 21:04:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:08:22.750 00:08:22.750 --- 10.0.0.1 ping statistics --- 00:08:22.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.750 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:08:22.750 21:04:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.750 21:04:00 -- nvmf/common.sh@410 -- # return 0 00:08:22.750 21:04:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:22.750 21:04:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.750 21:04:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:22.750 21:04:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:22.750 21:04:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.750 21:04:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:22.750 21:04:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:22.750 21:04:00 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:22.750 21:04:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:22.750 21:04:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.750 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:08:22.750 21:04:00 -- nvmf/common.sh@469 -- # nvmfpid=2204671 00:08:22.750 21:04:00 -- nvmf/common.sh@470 -- # waitforlisten 2204671 00:08:22.750 21:04:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.750 21:04:00 -- common/autotest_common.sh@819 -- # '[' -z 2204671 ']' 00:08:22.750 21:04:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.750 21:04:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:22.750 21:04:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.750 21:04:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:22.750 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:08:23.011 [2024-06-08 21:04:00.871764] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:23.011 [2024-06-08 21:04:00.871825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.011 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.011 [2024-06-08 21:04:00.945540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.011 [2024-06-08 21:04:01.018263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:23.011 [2024-06-08 21:04:01.018397] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.011 [2024-06-08 21:04:01.018416] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.011 [2024-06-08 21:04:01.018425] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
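The nvmf_tcp_init block above builds the two-port test bed on the E810 pair before the target starts: one port is moved into a private network namespace as the target side, both ends are addressed, TCP/4420 is opened, reachability is checked in both directions, and nvme-tcp is loaded. A sketch of the same steps; the cvl_0_0/cvl_0_1 names are the ones enumerated earlier in this log and are host-specific:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &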
00:08:23.011 [2024-06-08 21:04:01.018668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.011 [2024-06-08 21:04:01.018790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.011 [2024-06-08 21:04:01.018950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.012 [2024-06-08 21:04:01.018952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.583 21:04:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:23.583 21:04:01 -- common/autotest_common.sh@852 -- # return 0 00:08:23.583 21:04:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:23.583 21:04:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:23.583 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.844 21:04:01 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 [2024-06-08 21:04:01.695577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@26 -- # seq 1 4 00:08:23.844 21:04:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:23.844 21:04:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 Null1 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 [2024-06-08 21:04:01.751887] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:23.844 21:04:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 Null2 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:23.844 21:04:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:23.844 21:04:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 Null3 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:23.844 21:04:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 Null4 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:23.844 
21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:23.844 21:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.844 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 21:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.844 21:04:01 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:24.105 00:08:24.105 Discovery Log Number of Records 6, Generation counter 6 00:08:24.105 =====Discovery Log Entry 0====== 00:08:24.105 trtype: tcp 00:08:24.105 adrfam: ipv4 00:08:24.105 subtype: current discovery subsystem 00:08:24.105 treq: not required 00:08:24.105 portid: 0 00:08:24.105 trsvcid: 4420 00:08:24.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.105 traddr: 10.0.0.2 00:08:24.105 eflags: explicit discovery connections, duplicate discovery information 00:08:24.105 sectype: none 00:08:24.105 =====Discovery Log Entry 1====== 00:08:24.105 trtype: tcp 00:08:24.105 adrfam: ipv4 00:08:24.105 subtype: nvme subsystem 00:08:24.105 treq: not required 00:08:24.105 portid: 0 00:08:24.105 trsvcid: 4420 00:08:24.105 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:24.105 traddr: 10.0.0.2 00:08:24.105 eflags: none 00:08:24.105 sectype: none 00:08:24.105 =====Discovery Log Entry 2====== 00:08:24.105 trtype: tcp 00:08:24.105 adrfam: ipv4 00:08:24.105 subtype: nvme subsystem 00:08:24.105 treq: not required 00:08:24.105 portid: 0 00:08:24.105 trsvcid: 4420 00:08:24.105 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:24.105 traddr: 10.0.0.2 00:08:24.105 eflags: none 00:08:24.105 sectype: none 00:08:24.105 =====Discovery Log Entry 3====== 00:08:24.105 trtype: tcp 00:08:24.105 adrfam: ipv4 00:08:24.105 subtype: nvme subsystem 00:08:24.105 treq: not required 00:08:24.105 portid: 0 00:08:24.105 trsvcid: 4420 00:08:24.105 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:24.105 traddr: 10.0.0.2 00:08:24.105 eflags: none 00:08:24.105 sectype: none 00:08:24.105 =====Discovery Log Entry 4====== 00:08:24.105 trtype: tcp 00:08:24.105 adrfam: ipv4 00:08:24.105 subtype: nvme subsystem 00:08:24.105 treq: not required 00:08:24.105 portid: 0 00:08:24.105 trsvcid: 4420 00:08:24.105 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:24.105 traddr: 10.0.0.2 00:08:24.105 eflags: none 00:08:24.105 sectype: none 00:08:24.105 =====Discovery Log Entry 5====== 00:08:24.105 trtype: tcp 00:08:24.105 adrfam: ipv4 00:08:24.105 subtype: discovery subsystem referral 00:08:24.105 treq: not required 00:08:24.105 portid: 0 00:08:24.105 trsvcid: 4430 00:08:24.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.105 traddr: 10.0.0.2 00:08:24.105 eflags: none 00:08:24.105 sectype: none 00:08:24.105 21:04:02 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:24.105 Perform nvmf subsystem discovery via RPC 00:08:24.105 21:04:02 -- 
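The six discovery records printed above are what the preceding setup should produce: the current discovery subsystem, the four Null-backed subsystems cnode1-4, and the 4430 referral. A sketch of that setup and the query, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and $NVME_HOSTNQN/$NVME_HOSTID coming from the sourced nvmf/common.sh:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      scripts/rpc.py bdev_null_create Null$i 102400 512
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i   # serial format as printed above
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420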
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:24.105 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.105 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.105 [2024-06-08 21:04:02.056755] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:24.105 [ 00:08:24.105 { 00:08:24.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:24.105 "subtype": "Discovery", 00:08:24.105 "listen_addresses": [ 00:08:24.106 { 00:08:24.106 "transport": "TCP", 00:08:24.106 "trtype": "TCP", 00:08:24.106 "adrfam": "IPv4", 00:08:24.106 "traddr": "10.0.0.2", 00:08:24.106 "trsvcid": "4420" 00:08:24.106 } 00:08:24.106 ], 00:08:24.106 "allow_any_host": true, 00:08:24.106 "hosts": [] 00:08:24.106 }, 00:08:24.106 { 00:08:24.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.106 "subtype": "NVMe", 00:08:24.106 "listen_addresses": [ 00:08:24.106 { 00:08:24.106 "transport": "TCP", 00:08:24.106 "trtype": "TCP", 00:08:24.106 "adrfam": "IPv4", 00:08:24.106 "traddr": "10.0.0.2", 00:08:24.106 "trsvcid": "4420" 00:08:24.106 } 00:08:24.106 ], 00:08:24.106 "allow_any_host": true, 00:08:24.106 "hosts": [], 00:08:24.106 "serial_number": "SPDK00000000000001", 00:08:24.106 "model_number": "SPDK bdev Controller", 00:08:24.106 "max_namespaces": 32, 00:08:24.106 "min_cntlid": 1, 00:08:24.106 "max_cntlid": 65519, 00:08:24.106 "namespaces": [ 00:08:24.106 { 00:08:24.106 "nsid": 1, 00:08:24.106 "bdev_name": "Null1", 00:08:24.106 "name": "Null1", 00:08:24.106 "nguid": "20D83532D21F42E3BEDBF090779D61B1", 00:08:24.106 "uuid": "20d83532-d21f-42e3-bedb-f090779d61b1" 00:08:24.106 } 00:08:24.106 ] 00:08:24.106 }, 00:08:24.106 { 00:08:24.106 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:24.106 "subtype": "NVMe", 00:08:24.106 "listen_addresses": [ 00:08:24.106 { 00:08:24.106 "transport": "TCP", 00:08:24.106 "trtype": "TCP", 00:08:24.106 "adrfam": "IPv4", 00:08:24.106 "traddr": "10.0.0.2", 00:08:24.106 "trsvcid": "4420" 00:08:24.106 } 00:08:24.106 ], 00:08:24.106 "allow_any_host": true, 00:08:24.106 "hosts": [], 00:08:24.106 "serial_number": "SPDK00000000000002", 00:08:24.106 "model_number": "SPDK bdev Controller", 00:08:24.106 "max_namespaces": 32, 00:08:24.106 "min_cntlid": 1, 00:08:24.106 "max_cntlid": 65519, 00:08:24.106 "namespaces": [ 00:08:24.106 { 00:08:24.106 "nsid": 1, 00:08:24.106 "bdev_name": "Null2", 00:08:24.106 "name": "Null2", 00:08:24.106 "nguid": "22473B710C6C4EF7BDCC7D3AA010533D", 00:08:24.106 "uuid": "22473b71-0c6c-4ef7-bdcc-7d3aa010533d" 00:08:24.106 } 00:08:24.106 ] 00:08:24.106 }, 00:08:24.106 { 00:08:24.106 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:24.106 "subtype": "NVMe", 00:08:24.106 "listen_addresses": [ 00:08:24.106 { 00:08:24.106 "transport": "TCP", 00:08:24.106 "trtype": "TCP", 00:08:24.106 "adrfam": "IPv4", 00:08:24.106 "traddr": "10.0.0.2", 00:08:24.106 "trsvcid": "4420" 00:08:24.106 } 00:08:24.106 ], 00:08:24.106 "allow_any_host": true, 00:08:24.106 "hosts": [], 00:08:24.106 "serial_number": "SPDK00000000000003", 00:08:24.106 "model_number": "SPDK bdev Controller", 00:08:24.106 "max_namespaces": 32, 00:08:24.106 "min_cntlid": 1, 00:08:24.106 "max_cntlid": 65519, 00:08:24.106 "namespaces": [ 00:08:24.106 { 00:08:24.106 "nsid": 1, 00:08:24.106 "bdev_name": "Null3", 00:08:24.106 "name": "Null3", 00:08:24.106 "nguid": "D9174E7E14C64909A8B5C80A51FFEB17", 00:08:24.106 "uuid": "d9174e7e-14c6-4909-a8b5-c80a51ffeb17" 00:08:24.106 } 00:08:24.106 ] 
00:08:24.106 }, 00:08:24.106 { 00:08:24.106 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:24.106 "subtype": "NVMe", 00:08:24.106 "listen_addresses": [ 00:08:24.106 { 00:08:24.106 "transport": "TCP", 00:08:24.106 "trtype": "TCP", 00:08:24.106 "adrfam": "IPv4", 00:08:24.106 "traddr": "10.0.0.2", 00:08:24.106 "trsvcid": "4420" 00:08:24.106 } 00:08:24.106 ], 00:08:24.106 "allow_any_host": true, 00:08:24.106 "hosts": [], 00:08:24.106 "serial_number": "SPDK00000000000004", 00:08:24.106 "model_number": "SPDK bdev Controller", 00:08:24.106 "max_namespaces": 32, 00:08:24.106 "min_cntlid": 1, 00:08:24.106 "max_cntlid": 65519, 00:08:24.106 "namespaces": [ 00:08:24.106 { 00:08:24.106 "nsid": 1, 00:08:24.106 "bdev_name": "Null4", 00:08:24.106 "name": "Null4", 00:08:24.106 "nguid": "F6F1C53B13FA459F8E2E6B2FBFD49F04", 00:08:24.106 "uuid": "f6f1c53b-13fa-459f-8e2e-6b2fbfd49f04" 00:08:24.106 } 00:08:24.106 ] 00:08:24.106 } 00:08:24.106 ] 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@42 -- # seq 1 4 00:08:24.106 21:04:02 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.106 21:04:02 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.106 21:04:02 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.106 21:04:02 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.106 21:04:02 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:24.106 21:04:02 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.106 21:04:02 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:24.106 21:04:02 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:24.106 21:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.106 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.106 21:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.368 21:04:02 -- target/discovery.sh@49 -- # check_bdevs= 00:08:24.368 21:04:02 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:24.368 21:04:02 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:24.368 21:04:02 -- target/discovery.sh@57 -- # nvmftestfini 00:08:24.368 21:04:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:24.368 21:04:02 -- nvmf/common.sh@116 -- # sync 00:08:24.368 21:04:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:24.368 21:04:02 -- nvmf/common.sh@119 -- # set +e 00:08:24.368 21:04:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:24.368 21:04:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:24.368 rmmod nvme_tcp 00:08:24.368 rmmod nvme_fabrics 00:08:24.368 rmmod nvme_keyring 00:08:24.368 21:04:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:24.368 21:04:02 -- nvmf/common.sh@123 -- # set -e 00:08:24.368 21:04:02 -- nvmf/common.sh@124 -- # return 0 00:08:24.368 21:04:02 -- nvmf/common.sh@477 -- # '[' -n 2204671 ']' 00:08:24.368 21:04:02 -- nvmf/common.sh@478 -- # killprocess 2204671 00:08:24.368 21:04:02 -- common/autotest_common.sh@926 -- # '[' -z 2204671 ']' 00:08:24.368 21:04:02 -- common/autotest_common.sh@930 -- # kill -0 2204671 00:08:24.368 21:04:02 -- common/autotest_common.sh@931 -- # uname 00:08:24.368 21:04:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:24.368 21:04:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2204671 00:08:24.368 21:04:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:24.368 21:04:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:24.368 21:04:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2204671' 00:08:24.368 killing process with pid 2204671 00:08:24.368 21:04:02 -- common/autotest_common.sh@945 -- # kill 2204671 00:08:24.368 [2024-06-08 21:04:02.350021] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:24.368 21:04:02 -- common/autotest_common.sh@950 -- # wait 2204671 00:08:24.628 21:04:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:24.628 21:04:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:24.628 21:04:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:24.628 21:04:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.628 21:04:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:24.628 21:04:02 -- nvmf/common.sh@616 -- # 
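Teardown mirrors the setup: after the nvmf_get_subsystems dump, the four subsystems and their Null bdevs are deleted and the 4430 referral is dropped, and the bdev_get_bdevs check a few lines below comes back empty. A sketch of that inspect-and-unwind half; the jq filter on .nqn is illustrative, the script itself only filters bdev names:

  scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  for i in 1 2 3 4; do
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      scripts/rpc.py bdev_null_delete Null$i
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # expected to print nothing at this point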
xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.628 21:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.628 21:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.540 21:04:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:26.540 00:08:26.540 real 0m10.950s 00:08:26.540 user 0m8.054s 00:08:26.540 sys 0m5.572s 00:08:26.540 21:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.540 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:26.540 ************************************ 00:08:26.540 END TEST nvmf_discovery 00:08:26.540 ************************************ 00:08:26.540 21:04:04 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:26.540 21:04:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:26.540 21:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.540 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:26.540 ************************************ 00:08:26.540 START TEST nvmf_referrals 00:08:26.540 ************************************ 00:08:26.541 21:04:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:26.802 * Looking for test storage... 00:08:26.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.802 21:04:04 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.802 21:04:04 -- nvmf/common.sh@7 -- # uname -s 00:08:26.802 21:04:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.802 21:04:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.802 21:04:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.802 21:04:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.802 21:04:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.802 21:04:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.802 21:04:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.802 21:04:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.802 21:04:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.802 21:04:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.802 21:04:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:26.802 21:04:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:26.802 21:04:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.802 21:04:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.802 21:04:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.802 21:04:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.802 21:04:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.802 21:04:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.802 21:04:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.802 21:04:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.802 21:04:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.802 21:04:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.802 21:04:04 -- paths/export.sh@5 -- # export PATH 00:08:26.802 21:04:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.802 21:04:04 -- nvmf/common.sh@46 -- # : 0 00:08:26.802 21:04:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:26.802 21:04:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:26.802 21:04:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:26.802 21:04:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.802 21:04:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.802 21:04:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:26.802 21:04:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:26.802 21:04:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:26.802 21:04:04 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:26.802 21:04:04 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:26.802 21:04:04 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:26.802 21:04:04 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:26.802 21:04:04 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:26.802 21:04:04 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:26.802 21:04:04 -- target/referrals.sh@37 -- # nvmftestinit 00:08:26.802 21:04:04 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:26.802 21:04:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.802 21:04:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:26.802 21:04:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:26.802 21:04:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:26.802 21:04:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.802 21:04:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.802 21:04:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.802 21:04:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:26.802 21:04:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:26.802 21:04:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:26.802 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.392 21:04:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:33.392 21:04:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:33.392 21:04:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:33.392 21:04:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:33.392 21:04:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:33.392 21:04:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:33.392 21:04:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:33.392 21:04:11 -- nvmf/common.sh@294 -- # net_devs=() 00:08:33.392 21:04:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:33.392 21:04:11 -- nvmf/common.sh@295 -- # e810=() 00:08:33.392 21:04:11 -- nvmf/common.sh@295 -- # local -ga e810 00:08:33.392 21:04:11 -- nvmf/common.sh@296 -- # x722=() 00:08:33.392 21:04:11 -- nvmf/common.sh@296 -- # local -ga x722 00:08:33.392 21:04:11 -- nvmf/common.sh@297 -- # mlx=() 00:08:33.392 21:04:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:33.392 21:04:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.392 21:04:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:33.392 21:04:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:33.392 21:04:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:33.392 21:04:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:33.392 21:04:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:33.392 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:33.392 21:04:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:33.392 21:04:11 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:33.392 21:04:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:33.392 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:33.392 21:04:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:33.392 21:04:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:33.392 21:04:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.392 21:04:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:33.392 21:04:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.392 21:04:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:33.392 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:33.392 21:04:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.392 21:04:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:33.392 21:04:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.392 21:04:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:33.392 21:04:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.392 21:04:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:33.392 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:33.392 21:04:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.392 21:04:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:33.392 21:04:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:33.392 21:04:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:33.392 21:04:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:33.392 21:04:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.392 21:04:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.392 21:04:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.392 21:04:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:33.392 21:04:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.392 21:04:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.392 21:04:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:33.392 21:04:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.392 21:04:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.392 21:04:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:33.392 21:04:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:33.392 21:04:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.392 21:04:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:33.655 21:04:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.655 21:04:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.655 21:04:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:33.655 21:04:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.655 21:04:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.655 21:04:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.655 21:04:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:33.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:08:33.655 00:08:33.655 --- 10.0.0.2 ping statistics --- 00:08:33.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.655 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:08:33.655 21:04:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:08:33.655 00:08:33.655 --- 10.0.0.1 ping statistics --- 00:08:33.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.655 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:08:33.655 21:04:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.655 21:04:11 -- nvmf/common.sh@410 -- # return 0 00:08:33.655 21:04:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:33.655 21:04:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.655 21:04:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:33.655 21:04:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:33.655 21:04:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.655 21:04:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:33.655 21:04:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:33.915 21:04:11 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:33.915 21:04:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:33.915 21:04:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:33.915 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:33.915 21:04:11 -- nvmf/common.sh@469 -- # nvmfpid=2209251 00:08:33.915 21:04:11 -- nvmf/common.sh@470 -- # waitforlisten 2209251 00:08:33.915 21:04:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.915 21:04:11 -- common/autotest_common.sh@819 -- # '[' -z 2209251 ']' 00:08:33.915 21:04:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.915 21:04:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.915 21:04:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.915 21:04:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.915 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:33.915 [2024-06-08 21:04:11.807830] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:33.915 [2024-06-08 21:04:11.807886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.915 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.915 [2024-06-08 21:04:11.876894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.915 [2024-06-08 21:04:11.941200] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.915 [2024-06-08 21:04:11.941327] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.915 [2024-06-08 21:04:11.941337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.915 [2024-06-08 21:04:11.941345] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.915 [2024-06-08 21:04:11.941481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.915 [2024-06-08 21:04:11.941755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.915 [2024-06-08 21:04:11.941911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.915 [2024-06-08 21:04:11.941911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.486 21:04:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.486 21:04:12 -- common/autotest_common.sh@852 -- # return 0 00:08:34.486 21:04:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:34.486 21:04:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:34.486 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 21:04:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.748 21:04:12 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 [2024-06-08 21:04:12.613608] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 [2024-06-08 21:04:12.629768] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.748 21:04:12 -- target/referrals.sh@48 -- # jq length 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:34.748 21:04:12 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:34.748 21:04:12 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.748 21:04:12 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.748 21:04:12 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.748 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.748 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 21:04:12 -- target/referrals.sh@21 -- # sort 00:08:34.748 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:34.748 21:04:12 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:34.748 21:04:12 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:34.748 21:04:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.748 21:04:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.748 21:04:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.748 21:04:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.748 21:04:12 -- target/referrals.sh@26 -- # sort 00:08:35.010 21:04:12 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:35.010 21:04:12 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:35.010 21:04:12 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:35.010 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.010 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.010 21:04:12 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:35.010 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.010 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.010 21:04:12 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:35.010 21:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.010 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.010 21:04:12 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.010 21:04:12 -- target/referrals.sh@56 -- # jq length 00:08:35.010 21:04:12 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.010 21:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 21:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.010 21:04:12 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:35.010 21:04:12 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:35.010 21:04:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:35.010 21:04:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:35.010 21:04:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.010 21:04:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:35.010 21:04:12 -- target/referrals.sh@26 -- # sort 00:08:35.271 21:04:13 -- target/referrals.sh@26 -- # echo 00:08:35.271 21:04:13 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:35.271 21:04:13 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:35.271 21:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.271 21:04:13 -- common/autotest_common.sh@10 -- # set +x 00:08:35.271 21:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.271 21:04:13 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:35.271 21:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.271 21:04:13 -- common/autotest_common.sh@10 -- # set +x 00:08:35.271 21:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.271 21:04:13 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:35.271 21:04:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:35.271 21:04:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.271 21:04:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:35.271 21:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.271 21:04:13 -- target/referrals.sh@21 -- # sort 00:08:35.271 21:04:13 -- common/autotest_common.sh@10 -- # set +x 00:08:35.271 21:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.271 21:04:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:35.271 21:04:13 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:35.271 21:04:13 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:35.271 21:04:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:35.271 21:04:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:35.271 21:04:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.271 21:04:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:35.271 21:04:13 -- target/referrals.sh@26 -- # sort 00:08:35.271 21:04:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:35.531 21:04:13 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:35.531 21:04:13 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:35.531 21:04:13 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:35.531 21:04:13 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:35.531 21:04:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.531 21:04:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:35.531 21:04:13 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:35.531 21:04:13 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:35.531 21:04:13 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:35.531 21:04:13 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:35.531 21:04:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.531 21:04:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:35.531 21:04:13 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:35.531 21:04:13 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:35.531 21:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.531 21:04:13 -- common/autotest_common.sh@10 -- # set +x 00:08:35.531 21:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.531 21:04:13 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:35.531 21:04:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:35.531 21:04:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.531 21:04:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:35.531 21:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.531 21:04:13 -- common/autotest_common.sh@10 -- # set +x 00:08:35.531 21:04:13 -- target/referrals.sh@21 -- # sort 00:08:35.531 21:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.531 21:04:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:35.531 21:04:13 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:35.792 21:04:13 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:35.792 21:04:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:35.792 21:04:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:35.792 21:04:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:35.792 21:04:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.792 21:04:13 -- target/referrals.sh@26 -- # sort 00:08:35.792 21:04:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:35.792 21:04:13 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:35.792 21:04:13 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:35.792 21:04:13 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:35.792 21:04:13 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:35.792 21:04:13 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.792 21:04:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:36.052 21:04:13 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:36.052 21:04:13 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:36.052 21:04:13 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:36.052 21:04:13 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:36.052 21:04:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.052 21:04:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:36.052 21:04:14 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:36.052 21:04:14 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:36.052 21:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.052 21:04:14 -- common/autotest_common.sh@10 -- # set +x 00:08:36.052 21:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.052 21:04:14 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.052 21:04:14 -- target/referrals.sh@82 -- # jq length 00:08:36.052 21:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.052 21:04:14 -- common/autotest_common.sh@10 -- # set +x 00:08:36.052 21:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.052 21:04:14 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:36.312 21:04:14 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:36.312 21:04:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.312 21:04:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.312 21:04:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.312 21:04:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.312 21:04:14 -- target/referrals.sh@26 -- # sort 00:08:36.312 21:04:14 -- target/referrals.sh@26 -- # echo 00:08:36.312 21:04:14 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:36.312 21:04:14 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:36.312 21:04:14 -- target/referrals.sh@86 -- # nvmftestfini 00:08:36.312 21:04:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:36.312 21:04:14 -- nvmf/common.sh@116 -- # sync 00:08:36.312 21:04:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:36.312 21:04:14 -- nvmf/common.sh@119 -- # set +e 00:08:36.312 21:04:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:36.312 21:04:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:36.312 rmmod nvme_tcp 00:08:36.312 rmmod nvme_fabrics 00:08:36.312 rmmod nvme_keyring 00:08:36.312 21:04:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:36.312 21:04:14 -- nvmf/common.sh@123 -- # set -e 00:08:36.312 21:04:14 -- nvmf/common.sh@124 -- # return 0 00:08:36.312 21:04:14 -- nvmf/common.sh@477 
-- # '[' -n 2209251 ']' 00:08:36.312 21:04:14 -- nvmf/common.sh@478 -- # killprocess 2209251 00:08:36.312 21:04:14 -- common/autotest_common.sh@926 -- # '[' -z 2209251 ']' 00:08:36.312 21:04:14 -- common/autotest_common.sh@930 -- # kill -0 2209251 00:08:36.312 21:04:14 -- common/autotest_common.sh@931 -- # uname 00:08:36.312 21:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:36.312 21:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2209251 00:08:36.312 21:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:36.312 21:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:36.312 21:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2209251' 00:08:36.312 killing process with pid 2209251 00:08:36.312 21:04:14 -- common/autotest_common.sh@945 -- # kill 2209251 00:08:36.313 21:04:14 -- common/autotest_common.sh@950 -- # wait 2209251 00:08:36.572 21:04:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:36.572 21:04:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:36.572 21:04:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:36.572 21:04:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.572 21:04:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:36.572 21:04:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.573 21:04:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.573 21:04:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.487 21:04:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:38.487 00:08:38.487 real 0m11.980s 00:08:38.487 user 0m13.156s 00:08:38.487 sys 0m5.807s 00:08:38.487 21:04:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.487 21:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:38.487 ************************************ 00:08:38.487 END TEST nvmf_referrals 00:08:38.487 ************************************ 00:08:38.751 21:04:16 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:38.751 21:04:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:38.751 21:04:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.751 21:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:38.751 ************************************ 00:08:38.751 START TEST nvmf_connect_disconnect 00:08:38.751 ************************************ 00:08:38.751 21:04:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:38.751 * Looking for test storage... 
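Stripped of the xtrace noise, the nvmf_referrals run above follows a simple add / verify / remove pattern. The sketch below reconstructs it from the logged commands; it calls SPDK's scripts/rpc.py directly where the test goes through its rpc_cmd wrapper, and the hostnqn/hostid values are the ones this run generated in common.sh.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)

  # Target side: TCP transport, discovery listener on 8009, three referrals.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # The same three addresses must be visible over RPC and over the wire.
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover "${HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Remove them again; afterwards both views must be empty, which is what
  # the 'jq length' and "'' == ''" checks above assert.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done

The later part of the test repeats the pattern with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check that referrals to nvme subsystems and to discovery subsystems come back with the expected subtype and subnqn.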
00:08:38.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.751 21:04:16 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.751 21:04:16 -- nvmf/common.sh@7 -- # uname -s 00:08:38.751 21:04:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.751 21:04:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.751 21:04:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.751 21:04:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.751 21:04:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.751 21:04:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.751 21:04:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.751 21:04:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.751 21:04:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.751 21:04:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.751 21:04:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:38.751 21:04:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:38.751 21:04:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.751 21:04:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.751 21:04:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.751 21:04:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.751 21:04:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.751 21:04:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.751 21:04:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.751 21:04:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.751 21:04:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.751 21:04:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.751 21:04:16 -- paths/export.sh@5 -- # export PATH 00:08:38.751 21:04:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.751 21:04:16 -- nvmf/common.sh@46 -- # : 0 00:08:38.751 21:04:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.751 21:04:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.751 21:04:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.751 21:04:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.751 21:04:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.751 21:04:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.751 21:04:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.751 21:04:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.751 21:04:16 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.751 21:04:16 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.751 21:04:16 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:38.751 21:04:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:38.751 21:04:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.751 21:04:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.751 21:04:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.751 21:04:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.751 21:04:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.751 21:04:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.751 21:04:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.751 21:04:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:38.751 21:04:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:38.751 21:04:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:38.751 21:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.890 21:04:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.890 21:04:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.890 21:04:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.890 21:04:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.890 21:04:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.890 21:04:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.890 21:04:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.890 21:04:23 -- nvmf/common.sh@294 -- # net_devs=() 00:08:46.890 21:04:23 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:46.891 21:04:23 -- nvmf/common.sh@295 -- # e810=() 00:08:46.891 21:04:23 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.891 21:04:23 -- nvmf/common.sh@296 -- # x722=() 00:08:46.891 21:04:23 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.891 21:04:23 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.891 21:04:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.891 21:04:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.891 21:04:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.891 21:04:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:46.891 21:04:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.891 21:04:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.891 21:04:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:46.891 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:46.891 21:04:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.891 21:04:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:46.891 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:46.891 21:04:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.891 21:04:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.891 21:04:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.891 21:04:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.891 21:04:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.891 21:04:23 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:08:46.891 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:46.891 21:04:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.891 21:04:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.891 21:04:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.891 21:04:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.891 21:04:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.891 21:04:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:46.891 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:46.891 21:04:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.891 21:04:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.891 21:04:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.891 21:04:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:46.891 21:04:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.891 21:04:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.891 21:04:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.891 21:04:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:46.891 21:04:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.891 21:04:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.891 21:04:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:46.891 21:04:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.891 21:04:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.891 21:04:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.891 21:04:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.891 21:04:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.891 21:04:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.891 21:04:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.891 21:04:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.891 21:04:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.891 21:04:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.891 21:04:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.891 21:04:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.891 21:04:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:08:46.891 00:08:46.891 --- 10.0.0.2 ping statistics --- 00:08:46.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.891 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:08:46.891 21:04:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:08:46.891 00:08:46.891 --- 10.0.0.1 ping statistics --- 00:08:46.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.891 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:08:46.891 21:04:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.891 21:04:23 -- nvmf/common.sh@410 -- # return 0 00:08:46.891 21:04:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.891 21:04:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.891 21:04:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.891 21:04:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.891 21:04:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.891 21:04:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.891 21:04:23 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:46.891 21:04:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.891 21:04:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.891 21:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.891 21:04:23 -- nvmf/common.sh@469 -- # nvmfpid=2214038 00:08:46.891 21:04:23 -- nvmf/common.sh@470 -- # waitforlisten 2214038 00:08:46.891 21:04:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.891 21:04:23 -- common/autotest_common.sh@819 -- # '[' -z 2214038 ']' 00:08:46.891 21:04:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.891 21:04:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.891 21:04:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.891 21:04:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.891 21:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.891 [2024-06-08 21:04:24.011352] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:46.891 [2024-06-08 21:04:24.011444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.891 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.891 [2024-06-08 21:04:24.081095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.891 [2024-06-08 21:04:24.153440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.891 [2024-06-08 21:04:24.153578] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.891 [2024-06-08 21:04:24.153588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.891 [2024-06-08 21:04:24.153597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
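The nvmfappstart step above amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket before any rpc_cmd calls are issued. A minimal sketch of that is shown below; the polling loop is a deliberate simplification of the test's waitforlisten helper, which polls the socket with bounded retries (max_retries=100 above).

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!   # pid of the namespaced target, as logged above (2214038 in this run)

  # Simplified stand-in for waitforlisten: wait for the RPC socket to appear.
  # It is a filesystem Unix socket, so it is reachable from the default
  # namespace even though the target runs in cvl_0_0_ns_spdk.
  until [ -S /var/tmp/spdk.sock ]; do
      sleep 0.2
  done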
00:08:46.891 [2024-06-08 21:04:24.153756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.891 [2024-06-08 21:04:24.153873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.891 [2024-06-08 21:04:24.154033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.891 [2024-06-08 21:04:24.154034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.891 21:04:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.891 21:04:24 -- common/autotest_common.sh@852 -- # return 0 00:08:46.891 21:04:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.891 21:04:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.891 21:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.891 21:04:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.891 21:04:24 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:46.891 21:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.891 21:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.892 [2024-06-08 21:04:24.827621] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.892 21:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:46.892 21:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.892 21:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.892 21:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.892 21:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.892 21:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.892 21:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.892 21:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.892 21:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.892 21:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.892 21:04:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.892 21:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.892 [2024-06-08 21:04:24.887062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.892 21:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:46.892 21:04:24 -- target/connect_disconnect.sh@34 -- # set +x 00:08:49.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:58.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.064 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:51.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.166 21:08:17 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
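Each of the hundred or so lines above is one iteration of the connect/disconnect loop. Reconstructed from the logged setup (a 64 MB Malloc0 bdev with 512-byte blocks behind nqn.2016-06.io.spdk:cnode1, a TCP listener on 10.0.0.2:4420, num_iterations=100 and NVME_CONNECT='nvme connect -i 8'), a hedged sketch of that loop looks like this; the sleep is only a placeholder for the test's own readiness handling.

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)

  for i in $(seq 1 100); do
      # Connect over TCP with 8 I/O queue pairs, per NVME_CONNECT='nvme connect -i 8'.
      nvme connect -i 8 "${HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN"

      # Placeholder: the test waits for the controller to be usable before tearing down.
      sleep 1

      # Produces the 'NQN:... disconnected 1 controller(s)' lines captured above.
      nvme disconnect -n "$SUBNQN"
  done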
00:12:39.166 21:08:17 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:39.166 21:08:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:39.166 21:08:17 -- nvmf/common.sh@116 -- # sync 00:12:39.166 21:08:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:39.166 21:08:17 -- nvmf/common.sh@119 -- # set +e 00:12:39.166 21:08:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:39.166 21:08:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:39.166 rmmod nvme_tcp 00:12:39.166 rmmod nvme_fabrics 00:12:39.166 rmmod nvme_keyring 00:12:39.166 21:08:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:39.166 21:08:17 -- nvmf/common.sh@123 -- # set -e 00:12:39.166 21:08:17 -- nvmf/common.sh@124 -- # return 0 00:12:39.166 21:08:17 -- nvmf/common.sh@477 -- # '[' -n 2214038 ']' 00:12:39.166 21:08:17 -- nvmf/common.sh@478 -- # killprocess 2214038 00:12:39.166 21:08:17 -- common/autotest_common.sh@926 -- # '[' -z 2214038 ']' 00:12:39.166 21:08:17 -- common/autotest_common.sh@930 -- # kill -0 2214038 00:12:39.166 21:08:17 -- common/autotest_common.sh@931 -- # uname 00:12:39.166 21:08:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:39.166 21:08:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2214038 00:12:39.166 21:08:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:39.166 21:08:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:39.166 21:08:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2214038' 00:12:39.166 killing process with pid 2214038 00:12:39.166 21:08:17 -- common/autotest_common.sh@945 -- # kill 2214038 00:12:39.166 21:08:17 -- common/autotest_common.sh@950 -- # wait 2214038 00:12:39.427 21:08:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:39.427 21:08:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:39.427 21:08:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:39.427 21:08:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.427 21:08:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:39.427 21:08:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.427 21:08:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.427 21:08:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.341 21:08:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:41.341 00:12:41.341 real 4m2.806s 00:12:41.341 user 15m26.795s 00:12:41.341 sys 0m21.771s 00:12:41.341 21:08:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.341 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:12:41.341 ************************************ 00:12:41.341 END TEST nvmf_connect_disconnect 00:12:41.341 ************************************ 00:12:41.602 21:08:19 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:41.602 21:08:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:41.602 21:08:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:41.602 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:12:41.602 ************************************ 00:12:41.602 START TEST nvmf_multitarget 00:12:41.602 ************************************ 00:12:41.602 21:08:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:41.602 * Looking for test storage... 
00:12:41.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.602 21:08:19 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.602 21:08:19 -- nvmf/common.sh@7 -- # uname -s 00:12:41.602 21:08:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.602 21:08:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.602 21:08:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.602 21:08:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.602 21:08:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.602 21:08:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.602 21:08:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.602 21:08:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.602 21:08:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.602 21:08:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.602 21:08:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.603 21:08:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.603 21:08:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.603 21:08:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.603 21:08:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.603 21:08:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.603 21:08:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.603 21:08:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.603 21:08:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.603 21:08:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.603 21:08:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.603 21:08:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.603 21:08:19 -- paths/export.sh@5 -- # export PATH 00:12:41.603 21:08:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.603 21:08:19 -- nvmf/common.sh@46 -- # : 0 00:12:41.603 21:08:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.603 21:08:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.603 21:08:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.603 21:08:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.603 21:08:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.603 21:08:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:41.603 21:08:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.603 21:08:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.603 21:08:19 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:41.603 21:08:19 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:41.603 21:08:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.603 21:08:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.603 21:08:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.603 21:08:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.603 21:08:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.603 21:08:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.603 21:08:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.603 21:08:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.603 21:08:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:41.603 21:08:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:41.603 21:08:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:41.603 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:12:48.196 21:08:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:48.196 21:08:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:48.196 21:08:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:48.196 21:08:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:48.196 21:08:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:48.196 21:08:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:48.196 21:08:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:48.196 21:08:26 -- nvmf/common.sh@294 -- # net_devs=() 00:12:48.196 21:08:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:48.196 21:08:26 -- 
nvmf/common.sh@295 -- # e810=() 00:12:48.196 21:08:26 -- nvmf/common.sh@295 -- # local -ga e810 00:12:48.196 21:08:26 -- nvmf/common.sh@296 -- # x722=() 00:12:48.196 21:08:26 -- nvmf/common.sh@296 -- # local -ga x722 00:12:48.196 21:08:26 -- nvmf/common.sh@297 -- # mlx=() 00:12:48.196 21:08:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:48.196 21:08:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.196 21:08:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:48.196 21:08:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:48.196 21:08:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:48.196 21:08:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:48.196 21:08:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:48.196 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:48.196 21:08:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:48.196 21:08:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:48.196 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:48.196 21:08:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:48.196 21:08:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:48.196 21:08:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.196 21:08:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:48.196 21:08:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.196 21:08:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:12:48.196 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:48.196 21:08:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.196 21:08:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:48.196 21:08:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.196 21:08:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:48.196 21:08:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.196 21:08:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:48.196 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:48.196 21:08:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.196 21:08:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:48.196 21:08:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:48.196 21:08:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:48.196 21:08:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:48.196 21:08:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.196 21:08:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.196 21:08:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.196 21:08:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:48.196 21:08:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.196 21:08:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.196 21:08:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:48.196 21:08:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.196 21:08:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.196 21:08:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:48.196 21:08:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:48.196 21:08:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.196 21:08:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.458 21:08:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.458 21:08:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.458 21:08:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:48.458 21:08:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.458 21:08:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.458 21:08:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.458 21:08:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:48.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:12:48.458 00:12:48.458 --- 10.0.0.2 ping statistics --- 00:12:48.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.458 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:12:48.458 21:08:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:12:48.458 00:12:48.458 --- 10.0.0.1 ping statistics --- 00:12:48.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.458 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:12:48.458 21:08:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.458 21:08:26 -- nvmf/common.sh@410 -- # return 0 00:12:48.458 21:08:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:48.458 21:08:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.458 21:08:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:48.458 21:08:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:48.458 21:08:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.458 21:08:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:48.458 21:08:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:48.458 21:08:26 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.458 21:08:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:48.458 21:08:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:48.458 21:08:26 -- common/autotest_common.sh@10 -- # set +x 00:12:48.458 21:08:26 -- nvmf/common.sh@469 -- # nvmfpid=2266284 00:12:48.458 21:08:26 -- nvmf/common.sh@470 -- # waitforlisten 2266284 00:12:48.458 21:08:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.458 21:08:26 -- common/autotest_common.sh@819 -- # '[' -z 2266284 ']' 00:12:48.458 21:08:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.458 21:08:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:48.458 21:08:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.458 21:08:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:48.458 21:08:26 -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 [2024-06-08 21:08:26.591561] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:48.719 [2024-06-08 21:08:26.591648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.719 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.719 [2024-06-08 21:08:26.665910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.719 [2024-06-08 21:08:26.739490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:48.719 [2024-06-08 21:08:26.739624] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.719 [2024-06-08 21:08:26.739634] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.719 [2024-06-08 21:08:26.739642] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
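The lines above launch the SPDK NVMe-oF target inside the freshly prepared cvl_0_0_ns_spdk namespace and wait for its RPC socket before any RPCs are issued. A minimal sketch of that launch step, using the binary and flags shown in the trace (paths shortened to the SPDK checkout root); the polling loop is only an illustrative stand-in for the waitforlisten helper, and /var/tmp/spdk.sock is the rpc_addr default visible above:

  # Run nvmf_tgt in the target namespace: shm id 0 (-i), tracepoint mask 0xFFFF (-e), 4 cores (-m 0xF).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                   # remembered so the teardown can kill it later
  # Wait for the app to come up and listen on its UNIX domain RPC socket
  # (stand-in for waitforlisten, which also retries RPCs with a timeout).
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

Only once the socket is up does the test start driving the multitarget RPCs against this process.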
00:12:48.719 [2024-06-08 21:08:26.739785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.719 [2024-06-08 21:08:26.739907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.719 [2024-06-08 21:08:26.740064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.719 [2024-06-08 21:08:26.740065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.291 21:08:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:49.291 21:08:27 -- common/autotest_common.sh@852 -- # return 0 00:12:49.291 21:08:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:49.291 21:08:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:49.291 21:08:27 -- common/autotest_common.sh@10 -- # set +x 00:12:49.552 21:08:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.552 21:08:27 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.552 21:08:27 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.552 21:08:27 -- target/multitarget.sh@21 -- # jq length 00:12:49.552 21:08:27 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:49.552 21:08:27 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:49.552 "nvmf_tgt_1" 00:12:49.552 21:08:27 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:49.812 "nvmf_tgt_2" 00:12:49.812 21:08:27 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.812 21:08:27 -- target/multitarget.sh@28 -- # jq length 00:12:49.812 21:08:27 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:49.812 21:08:27 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:49.812 true 00:12:50.073 21:08:27 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:50.073 true 00:12:50.073 21:08:28 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:50.073 21:08:28 -- target/multitarget.sh@35 -- # jq length 00:12:50.073 21:08:28 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:50.073 21:08:28 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:50.073 21:08:28 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:50.073 21:08:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:50.073 21:08:28 -- nvmf/common.sh@116 -- # sync 00:12:50.073 21:08:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:50.073 21:08:28 -- nvmf/common.sh@119 -- # set +e 00:12:50.073 21:08:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:50.073 21:08:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:50.073 rmmod nvme_tcp 00:12:50.073 rmmod nvme_fabrics 00:12:50.073 rmmod nvme_keyring 00:12:50.334 21:08:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:50.334 21:08:28 -- nvmf/common.sh@123 -- # set -e 00:12:50.334 21:08:28 -- nvmf/common.sh@124 -- # return 0 
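The nvmf_multitarget body traced above exercises the multi-target RPCs end to end: it counts the existing targets, creates nvmf_tgt_1 and nvmf_tgt_2, checks that the count went from 1 to 3, deletes both, and checks that it is back to 1. A condensed sketch of that cycle with the same script and flags as in the trace (the bracketed tests mirror the '[' N '!=' N ']' assertions above; paths shortened to the SPDK checkout root):

  rpc=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ] || exit 1    # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32                   # flags copied from the trace
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ] || exit 1    # default target plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ] || exit 1    # back to just the default target

After the checks pass, the trap is cleared and nvmftestfini tears the target process and the TCP setup down again, which is what the killprocess and module-unload lines that follow show.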
00:12:50.334 21:08:28 -- nvmf/common.sh@477 -- # '[' -n 2266284 ']' 00:12:50.334 21:08:28 -- nvmf/common.sh@478 -- # killprocess 2266284 00:12:50.334 21:08:28 -- common/autotest_common.sh@926 -- # '[' -z 2266284 ']' 00:12:50.334 21:08:28 -- common/autotest_common.sh@930 -- # kill -0 2266284 00:12:50.334 21:08:28 -- common/autotest_common.sh@931 -- # uname 00:12:50.334 21:08:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:50.334 21:08:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2266284 00:12:50.334 21:08:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:50.334 21:08:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:50.334 21:08:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2266284' 00:12:50.334 killing process with pid 2266284 00:12:50.334 21:08:28 -- common/autotest_common.sh@945 -- # kill 2266284 00:12:50.334 21:08:28 -- common/autotest_common.sh@950 -- # wait 2266284 00:12:50.334 21:08:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:50.334 21:08:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:50.334 21:08:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:50.334 21:08:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.334 21:08:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:50.334 21:08:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.334 21:08:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.334 21:08:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.918 21:08:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:52.918 00:12:52.918 real 0m10.971s 00:12:52.918 user 0m9.284s 00:12:52.918 sys 0m5.488s 00:12:52.918 21:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.918 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:52.918 ************************************ 00:12:52.918 END TEST nvmf_multitarget 00:12:52.918 ************************************ 00:12:52.918 21:08:30 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.918 21:08:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:52.918 21:08:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.918 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:52.918 ************************************ 00:12:52.918 START TEST nvmf_rpc 00:12:52.918 ************************************ 00:12:52.918 21:08:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.918 * Looking for test storage... 
00:12:52.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.918 21:08:30 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.918 21:08:30 -- nvmf/common.sh@7 -- # uname -s 00:12:52.918 21:08:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.918 21:08:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.918 21:08:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.918 21:08:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.918 21:08:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.918 21:08:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.918 21:08:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.918 21:08:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.918 21:08:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.918 21:08:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.918 21:08:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:52.918 21:08:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:52.918 21:08:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.918 21:08:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.918 21:08:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.918 21:08:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.918 21:08:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.918 21:08:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.918 21:08:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.918 21:08:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.918 21:08:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.918 21:08:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.918 21:08:30 -- paths/export.sh@5 -- # export PATH 00:12:52.918 21:08:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.918 21:08:30 -- nvmf/common.sh@46 -- # : 0 00:12:52.918 21:08:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.918 21:08:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.918 21:08:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.918 21:08:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.918 21:08:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.918 21:08:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:52.918 21:08:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.918 21:08:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.918 21:08:30 -- target/rpc.sh@11 -- # loops=5 00:12:52.918 21:08:30 -- target/rpc.sh@23 -- # nvmftestinit 00:12:52.918 21:08:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.918 21:08:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.918 21:08:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.918 21:08:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.918 21:08:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.918 21:08:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.918 21:08:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.918 21:08:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.918 21:08:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:52.918 21:08:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:52.918 21:08:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:52.918 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.510 21:08:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:59.510 21:08:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:59.510 21:08:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:59.510 21:08:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:59.510 21:08:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:59.510 21:08:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:59.510 21:08:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:59.510 21:08:37 -- nvmf/common.sh@294 -- # net_devs=() 00:12:59.510 21:08:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:59.510 21:08:37 -- nvmf/common.sh@295 -- # e810=() 00:12:59.510 21:08:37 -- nvmf/common.sh@295 -- # local -ga e810 00:12:59.510 
21:08:37 -- nvmf/common.sh@296 -- # x722=() 00:12:59.510 21:08:37 -- nvmf/common.sh@296 -- # local -ga x722 00:12:59.510 21:08:37 -- nvmf/common.sh@297 -- # mlx=() 00:12:59.510 21:08:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:59.510 21:08:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.510 21:08:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:59.510 21:08:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:59.510 21:08:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:59.510 21:08:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:59.510 21:08:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:59.510 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:59.510 21:08:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:59.510 21:08:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:59.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:59.510 21:08:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:59.510 21:08:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:59.510 21:08:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.510 21:08:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:59.510 21:08:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.510 21:08:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:59.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:59.510 21:08:37 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:59.510 21:08:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:59.510 21:08:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.510 21:08:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:59.510 21:08:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.510 21:08:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:59.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:59.510 21:08:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.510 21:08:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:59.510 21:08:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:59.510 21:08:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:59.510 21:08:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:59.510 21:08:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.510 21:08:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.510 21:08:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.510 21:08:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:59.510 21:08:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.510 21:08:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.510 21:08:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:59.510 21:08:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.510 21:08:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.510 21:08:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:59.510 21:08:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:59.510 21:08:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.510 21:08:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.510 21:08:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.510 21:08:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.510 21:08:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:59.511 21:08:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.511 21:08:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.511 21:08:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.511 21:08:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:59.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:12:59.511 00:12:59.511 --- 10.0.0.2 ping statistics --- 00:12:59.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.511 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:12:59.511 21:08:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:12:59.511 00:12:59.511 --- 10.0.0.1 ping statistics --- 00:12:59.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.511 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:12:59.511 21:08:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.511 21:08:37 -- nvmf/common.sh@410 -- # return 0 00:12:59.511 21:08:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:59.511 21:08:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.511 21:08:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:59.511 21:08:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:59.511 21:08:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.511 21:08:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:59.511 21:08:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:59.771 21:08:37 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:59.771 21:08:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:59.771 21:08:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:59.771 21:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:59.771 21:08:37 -- nvmf/common.sh@469 -- # nvmfpid=2270795 00:12:59.771 21:08:37 -- nvmf/common.sh@470 -- # waitforlisten 2270795 00:12:59.771 21:08:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.771 21:08:37 -- common/autotest_common.sh@819 -- # '[' -z 2270795 ']' 00:12:59.771 21:08:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.771 21:08:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:59.771 21:08:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.771 21:08:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:59.771 21:08:37 -- common/autotest_common.sh@10 -- # set +x 00:12:59.771 [2024-06-08 21:08:37.678101] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:59.771 [2024-06-08 21:08:37.678194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.771 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.771 [2024-06-08 21:08:37.752410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.771 [2024-06-08 21:08:37.826621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:59.771 [2024-06-08 21:08:37.826757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.771 [2024-06-08 21:08:37.826767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.771 [2024-06-08 21:08:37.826775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
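As with the first test, nvmf_tcp_init (traced above) splits the two E810 ports between the root namespace and a dedicated target namespace so that initiator and target traffic actually crosses the link: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-verified before the kernel initiator module is loaded. A condensed sketch of that plumbing, using the interface names and addresses from the trace (they are specific to this test bed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port, as in the trace
  ping -c 1 10.0.0.2                                              # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace
  modprobe nvme-tcp                                               # kernel initiator used by the nvme connect calls below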
00:12:59.771 [2024-06-08 21:08:37.826920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.771 [2024-06-08 21:08:37.827040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.771 [2024-06-08 21:08:37.827198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.771 [2024-06-08 21:08:37.827199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.713 21:08:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:00.713 21:08:38 -- common/autotest_common.sh@852 -- # return 0 00:13:00.713 21:08:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:00.713 21:08:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:00.713 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.713 21:08:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.713 21:08:38 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:00.713 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.713 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.713 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.713 21:08:38 -- target/rpc.sh@26 -- # stats='{ 00:13:00.713 "tick_rate": 2400000000, 00:13:00.713 "poll_groups": [ 00:13:00.713 { 00:13:00.713 "name": "nvmf_tgt_poll_group_0", 00:13:00.713 "admin_qpairs": 0, 00:13:00.713 "io_qpairs": 0, 00:13:00.713 "current_admin_qpairs": 0, 00:13:00.713 "current_io_qpairs": 0, 00:13:00.713 "pending_bdev_io": 0, 00:13:00.713 "completed_nvme_io": 0, 00:13:00.713 "transports": [] 00:13:00.713 }, 00:13:00.713 { 00:13:00.713 "name": "nvmf_tgt_poll_group_1", 00:13:00.713 "admin_qpairs": 0, 00:13:00.713 "io_qpairs": 0, 00:13:00.713 "current_admin_qpairs": 0, 00:13:00.713 "current_io_qpairs": 0, 00:13:00.713 "pending_bdev_io": 0, 00:13:00.713 "completed_nvme_io": 0, 00:13:00.713 "transports": [] 00:13:00.713 }, 00:13:00.713 { 00:13:00.713 "name": "nvmf_tgt_poll_group_2", 00:13:00.713 "admin_qpairs": 0, 00:13:00.713 "io_qpairs": 0, 00:13:00.713 "current_admin_qpairs": 0, 00:13:00.713 "current_io_qpairs": 0, 00:13:00.713 "pending_bdev_io": 0, 00:13:00.713 "completed_nvme_io": 0, 00:13:00.713 "transports": [] 00:13:00.713 }, 00:13:00.713 { 00:13:00.713 "name": "nvmf_tgt_poll_group_3", 00:13:00.713 "admin_qpairs": 0, 00:13:00.713 "io_qpairs": 0, 00:13:00.713 "current_admin_qpairs": 0, 00:13:00.713 "current_io_qpairs": 0, 00:13:00.713 "pending_bdev_io": 0, 00:13:00.713 "completed_nvme_io": 0, 00:13:00.713 "transports": [] 00:13:00.713 } 00:13:00.713 ] 00:13:00.713 }' 00:13:00.713 21:08:38 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:00.713 21:08:38 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:00.713 21:08:38 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:00.713 21:08:38 -- target/rpc.sh@15 -- # wc -l 00:13:00.713 21:08:38 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:00.713 21:08:38 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:00.713 21:08:38 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:00.713 21:08:38 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.713 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.713 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.713 [2024-06-08 21:08:38.604912] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.713 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.713 21:08:38 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:00.713 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.713 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.713 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.713 21:08:38 -- target/rpc.sh@33 -- # stats='{ 00:13:00.714 "tick_rate": 2400000000, 00:13:00.714 "poll_groups": [ 00:13:00.714 { 00:13:00.714 "name": "nvmf_tgt_poll_group_0", 00:13:00.714 "admin_qpairs": 0, 00:13:00.714 "io_qpairs": 0, 00:13:00.714 "current_admin_qpairs": 0, 00:13:00.714 "current_io_qpairs": 0, 00:13:00.714 "pending_bdev_io": 0, 00:13:00.714 "completed_nvme_io": 0, 00:13:00.714 "transports": [ 00:13:00.714 { 00:13:00.714 "trtype": "TCP" 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 }, 00:13:00.714 { 00:13:00.714 "name": "nvmf_tgt_poll_group_1", 00:13:00.714 "admin_qpairs": 0, 00:13:00.714 "io_qpairs": 0, 00:13:00.714 "current_admin_qpairs": 0, 00:13:00.714 "current_io_qpairs": 0, 00:13:00.714 "pending_bdev_io": 0, 00:13:00.714 "completed_nvme_io": 0, 00:13:00.714 "transports": [ 00:13:00.714 { 00:13:00.714 "trtype": "TCP" 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 }, 00:13:00.714 { 00:13:00.714 "name": "nvmf_tgt_poll_group_2", 00:13:00.714 "admin_qpairs": 0, 00:13:00.714 "io_qpairs": 0, 00:13:00.714 "current_admin_qpairs": 0, 00:13:00.714 "current_io_qpairs": 0, 00:13:00.714 "pending_bdev_io": 0, 00:13:00.714 "completed_nvme_io": 0, 00:13:00.714 "transports": [ 00:13:00.714 { 00:13:00.714 "trtype": "TCP" 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 }, 00:13:00.714 { 00:13:00.714 "name": "nvmf_tgt_poll_group_3", 00:13:00.714 "admin_qpairs": 0, 00:13:00.714 "io_qpairs": 0, 00:13:00.714 "current_admin_qpairs": 0, 00:13:00.714 "current_io_qpairs": 0, 00:13:00.714 "pending_bdev_io": 0, 00:13:00.714 "completed_nvme_io": 0, 00:13:00.714 "transports": [ 00:13:00.714 { 00:13:00.714 "trtype": "TCP" 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 } 00:13:00.714 ] 00:13:00.714 }' 00:13:00.714 21:08:38 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.714 21:08:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.714 21:08:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.714 21:08:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.714 21:08:38 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:00.714 21:08:38 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.714 21:08:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.714 21:08:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.714 21:08:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.714 21:08:38 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:00.714 21:08:38 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:00.714 21:08:38 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:00.714 21:08:38 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:00.714 21:08:38 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:00.714 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.714 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 Malloc1 00:13:00.714 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.714 21:08:38 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.714 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.714 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 
21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.714 21:08:38 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.714 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.714 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.714 21:08:38 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:00.714 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.714 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.714 21:08:38 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.714 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.714 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 [2024-06-08 21:08:38.784743] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.714 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.714 21:08:38 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:00.714 21:08:38 -- common/autotest_common.sh@640 -- # local es=0 00:13:00.714 21:08:38 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:00.714 21:08:38 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:00.714 21:08:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:00.714 21:08:38 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:00.714 21:08:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:00.714 21:08:38 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:00.714 21:08:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:00.714 21:08:38 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:00.714 21:08:38 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:00.714 21:08:38 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:00.975 [2024-06-08 21:08:38.819711] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:00.976 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:00.976 could not add new controller: failed to write to nvme-fabrics device 00:13:00.976 21:08:38 -- common/autotest_common.sh@643 -- # es=1 00:13:00.976 21:08:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:00.976 21:08:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:00.976 21:08:38 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:00.976 21:08:38 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:00.976 21:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.976 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:00.976 21:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.976 21:08:38 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.362 21:08:40 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.362 21:08:40 -- common/autotest_common.sh@1177 -- # local i=0 00:13:02.362 21:08:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.362 21:08:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:02.362 21:08:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:04.277 21:08:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:04.277 21:08:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:04.277 21:08:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.538 21:08:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:04.538 21:08:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.538 21:08:42 -- common/autotest_common.sh@1187 -- # return 0 00:13:04.538 21:08:42 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.538 21:08:42 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.538 21:08:42 -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.538 21:08:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:04.538 21:08:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.538 21:08:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:04.538 21:08:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.538 21:08:42 -- common/autotest_common.sh@1210 -- # return 0 00:13:04.538 21:08:42 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:04.538 21:08:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.538 21:08:42 -- common/autotest_common.sh@10 -- # set +x 00:13:04.538 21:08:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.539 21:08:42 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.539 21:08:42 -- common/autotest_common.sh@640 -- # local es=0 00:13:04.539 21:08:42 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.539 21:08:42 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:04.539 21:08:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.539 21:08:42 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:04.539 21:08:42 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.539 21:08:42 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:04.539 21:08:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.539 21:08:42 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:04.539 21:08:42 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:04.539 21:08:42 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.539 [2024-06-08 21:08:42.557363] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:04.539 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:04.539 could not add new controller: failed to write to nvme-fabrics device 00:13:04.539 21:08:42 -- common/autotest_common.sh@643 -- # es=1 00:13:04.539 21:08:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:04.539 21:08:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:04.539 21:08:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:04.539 21:08:42 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:04.539 21:08:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.539 21:08:42 -- common/autotest_common.sh@10 -- # set +x 00:13:04.539 21:08:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.539 21:08:42 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.453 21:08:44 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.453 21:08:44 -- common/autotest_common.sh@1177 -- # local i=0 00:13:06.453 21:08:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.453 21:08:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:06.453 21:08:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:08.367 21:08:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:08.367 21:08:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:08.367 21:08:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.367 21:08:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:08.367 21:08:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.367 21:08:46 -- common/autotest_common.sh@1187 -- # return 0 00:13:08.367 21:08:46 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.367 21:08:46 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.367 21:08:46 -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.367 21:08:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:08.367 21:08:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.367 21:08:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.367 21:08:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:08.367 21:08:46 -- common/autotest_common.sh@1210 -- # return 0 00:13:08.367 21:08:46 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.367 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.367 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.367 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.367 21:08:46 -- target/rpc.sh@81 -- # seq 1 5 00:13:08.367 21:08:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.368 21:08:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.368 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.368 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.368 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.368 21:08:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.368 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.368 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.368 [2024-06-08 21:08:46.226276] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.368 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.368 21:08:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.368 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.368 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.368 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.368 21:08:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.368 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.368 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.368 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.368 21:08:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.753 21:08:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.753 21:08:47 -- common/autotest_common.sh@1177 -- # local i=0 00:13:09.753 21:08:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.753 21:08:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:09.753 21:08:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:12.298 21:08:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:12.298 21:08:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:12.298 21:08:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.298 21:08:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:12.298 21:08:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.298 21:08:49 -- common/autotest_common.sh@1187 -- # return 0 00:13:12.298 21:08:49 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.298 21:08:49 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.298 21:08:49 -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.298 21:08:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:12.298 21:08:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
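From here the rpc.sh loop repeats the same subsystem lifecycle five times (seq 1 5): create the subsystem, expose it on the TCP listener, attach the Malloc1 bdev as a namespace, allow any host, connect with the kernel initiator, confirm the serial shows up in lsblk, then tear it all down. A condensed sketch of one iteration, using the RPCs and nvme-cli flags exactly as they appear in the trace (rpc_cmd is the autotest helper that forwards to SPDK's rpc.py; NVME_HOSTNQN and NVME_HOSTID are the values produced by nvme gen-hostnqn earlier in the trace):

  nqn=nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns $nqn Malloc1 -n 5                 # namespace id 5, as traced
  rpc_cmd nvmf_subsystem_allow_any_host $nqn                      # no per-host ACL inside the loop
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n $nqn -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME          # waitforserial: expect exactly one device
  nvme disconnect -n $nqn
  rpc_cmd nvmf_subsystem_remove_ns $nqn 5
  rpc_cmd nvmf_delete_subsystem $nqn

The access-control section just before the loop uses the same pieces: a connect attempt is expected to fail with "does not allow host" until nvmf_subsystem_add_host (or allow_any_host -e) admits the host NQN, and to fail again once the host is removed.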
00:13:12.298 21:08:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:12.298 21:08:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.298 21:08:49 -- common/autotest_common.sh@1210 -- # return 0 00:13:12.298 21:08:49 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.298 21:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.298 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.298 21:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.298 21:08:49 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.298 21:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.298 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.298 21:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.298 21:08:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.298 21:08:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.298 21:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.298 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.298 21:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.298 21:08:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.298 21:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.298 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.298 [2024-06-08 21:08:49.988094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.298 21:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.298 21:08:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.298 21:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.298 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:12.298 21:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.298 21:08:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.298 21:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.298 21:08:50 -- common/autotest_common.sh@10 -- # set +x 00:13:12.298 21:08:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.298 21:08:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.682 21:08:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.682 21:08:51 -- common/autotest_common.sh@1177 -- # local i=0 00:13:13.682 21:08:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.682 21:08:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:13.682 21:08:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:15.594 21:08:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:15.594 21:08:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:15.594 21:08:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.594 21:08:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:15.594 21:08:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.594 21:08:53 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:15.594 21:08:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.594 21:08:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.594 21:08:53 -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.594 21:08:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:15.594 21:08:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.594 21:08:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:15.594 21:08:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.594 21:08:53 -- common/autotest_common.sh@1210 -- # return 0 00:13:15.594 21:08:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.595 21:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.595 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.595 21:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.595 21:08:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.595 21:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.595 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.855 21:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.855 21:08:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.855 21:08:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.855 21:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.855 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.855 21:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.855 21:08:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.855 21:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.855 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.855 [2024-06-08 21:08:53.712932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.855 21:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.855 21:08:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.855 21:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.855 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.855 21:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.855 21:08:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.855 21:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.855 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.855 21:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.855 21:08:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.239 21:08:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.239 21:08:55 -- common/autotest_common.sh@1177 -- # local i=0 00:13:17.239 21:08:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.239 21:08:55 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:17.239 21:08:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:19.192 21:08:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:19.192 21:08:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:19.192 21:08:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.192 21:08:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:19.193 21:08:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.193 21:08:57 -- common/autotest_common.sh@1187 -- # return 0 00:13:19.193 21:08:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.453 21:08:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.453 21:08:57 -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.453 21:08:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:19.453 21:08:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.453 21:08:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:19.453 21:08:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.453 21:08:57 -- common/autotest_common.sh@1210 -- # return 0 00:13:19.453 21:08:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.453 21:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.453 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.453 21:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.453 21:08:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.453 21:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.453 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.453 21:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.453 21:08:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.453 21:08:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.453 21:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.453 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.453 21:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.453 21:08:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.454 21:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.454 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 [2024-06-08 21:08:57.446091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.454 21:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.454 21:08:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.454 21:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.454 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 21:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.454 21:08:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.454 21:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.454 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 21:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.454 
21:08:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.367 21:08:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.367 21:08:59 -- common/autotest_common.sh@1177 -- # local i=0 00:13:21.367 21:08:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.367 21:08:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:21.367 21:08:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:23.284 21:09:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:23.284 21:09:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:23.284 21:09:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.284 21:09:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:23.284 21:09:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.284 21:09:01 -- common/autotest_common.sh@1187 -- # return 0 00:13:23.284 21:09:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.284 21:09:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.284 21:09:01 -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.284 21:09:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:23.284 21:09:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.284 21:09:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:23.284 21:09:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.284 21:09:01 -- common/autotest_common.sh@1210 -- # return 0 00:13:23.284 21:09:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.284 21:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.284 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 21:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.284 21:09:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.284 21:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.284 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 21:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.284 21:09:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:23.284 21:09:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.284 21:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.284 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 21:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.284 21:09:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.284 21:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.284 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 [2024-06-08 21:09:01.197189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.284 21:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.284 21:09:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:23.284 
21:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.284 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 21:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.284 21:09:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.284 21:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.284 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 21:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.284 21:09:01 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.671 21:09:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.671 21:09:02 -- common/autotest_common.sh@1177 -- # local i=0 00:13:24.671 21:09:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.671 21:09:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:24.671 21:09:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:27.218 21:09:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:27.218 21:09:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:27.218 21:09:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.218 21:09:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:27.218 21:09:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.218 21:09:04 -- common/autotest_common.sh@1187 -- # return 0 00:13:27.218 21:09:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.218 21:09:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.218 21:09:04 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.218 21:09:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.218 21:09:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.218 21:09:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.218 21:09:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.218 21:09:04 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.218 21:09:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@99 -- # seq 1 5 00:13:27.218 21:09:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.218 21:09:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 [2024-06-08 21:09:04.914781] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.218 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.218 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.218 21:09:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.218 21:09:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.218 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.219 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 [2024-06-08 21:09:04.970894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.219 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.219 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.219 21:09:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:04 -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.219 21:09:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 [2024-06-08 21:09:05.031074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.219 21:09:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 [2024-06-08 21:09:05.087253] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 
21:09:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:27.219 21:09:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 [2024-06-08 21:09:05.143457] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
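For reference, one iteration of the create/attach/teardown cycle exercised by the first loop above can be reproduced directly with rpc.py and nvme-cli. This is a sketch assembled from this run's trace; the rpc.py path, the 10.0.0.2:4420 listener and the host NQN/ID are specific to this test bed.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME     # serial later checked via lsblk
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                # attach bdev Malloc1 as nsid 5
$rpc nvmf_subsystem_allow_any_host "$nqn"

nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme disconnect -n "$nqn"                                     # once the serial shows up in lsblk

$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"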
00:13:27.219 21:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.219 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:13:27.219 21:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.219 21:09:05 -- target/rpc.sh@110 -- # stats='{ 00:13:27.219 "tick_rate": 2400000000, 00:13:27.219 "poll_groups": [ 00:13:27.219 { 00:13:27.219 "name": "nvmf_tgt_poll_group_0", 00:13:27.219 "admin_qpairs": 0, 00:13:27.219 "io_qpairs": 224, 00:13:27.219 "current_admin_qpairs": 0, 00:13:27.219 "current_io_qpairs": 0, 00:13:27.219 "pending_bdev_io": 0, 00:13:27.219 "completed_nvme_io": 521, 00:13:27.219 "transports": [ 00:13:27.219 { 00:13:27.219 "trtype": "TCP" 00:13:27.219 } 00:13:27.219 ] 00:13:27.219 }, 00:13:27.219 { 00:13:27.219 "name": "nvmf_tgt_poll_group_1", 00:13:27.219 "admin_qpairs": 1, 00:13:27.219 "io_qpairs": 223, 00:13:27.219 "current_admin_qpairs": 0, 00:13:27.219 "current_io_qpairs": 0, 00:13:27.219 "pending_bdev_io": 0, 00:13:27.219 "completed_nvme_io": 224, 00:13:27.219 "transports": [ 00:13:27.219 { 00:13:27.219 "trtype": "TCP" 00:13:27.219 } 00:13:27.219 ] 00:13:27.219 }, 00:13:27.219 { 00:13:27.219 "name": "nvmf_tgt_poll_group_2", 00:13:27.219 "admin_qpairs": 6, 00:13:27.219 "io_qpairs": 218, 00:13:27.219 "current_admin_qpairs": 0, 00:13:27.219 "current_io_qpairs": 0, 00:13:27.219 "pending_bdev_io": 0, 00:13:27.219 "completed_nvme_io": 219, 00:13:27.219 "transports": [ 00:13:27.219 { 00:13:27.219 "trtype": "TCP" 00:13:27.219 } 00:13:27.219 ] 00:13:27.219 }, 00:13:27.219 { 00:13:27.220 "name": "nvmf_tgt_poll_group_3", 00:13:27.220 "admin_qpairs": 0, 00:13:27.220 "io_qpairs": 224, 00:13:27.220 "current_admin_qpairs": 0, 00:13:27.220 "current_io_qpairs": 0, 00:13:27.220 "pending_bdev_io": 0, 00:13:27.220 "completed_nvme_io": 275, 00:13:27.220 "transports": [ 00:13:27.220 { 00:13:27.220 "trtype": "TCP" 00:13:27.220 } 00:13:27.220 ] 00:13:27.220 } 00:13:27.220 ] 00:13:27.220 }' 00:13:27.220 21:09:05 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:27.220 21:09:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:27.220 21:09:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:27.220 21:09:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:27.220 21:09:05 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:27.220 21:09:05 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:27.220 21:09:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:27.220 21:09:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:27.220 21:09:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:27.220 21:09:05 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:27.220 21:09:05 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:27.220 21:09:05 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:27.220 21:09:05 -- target/rpc.sh@123 -- # nvmftestfini 00:13:27.220 21:09:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:27.220 21:09:05 -- nvmf/common.sh@116 -- # sync 00:13:27.220 21:09:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:27.220 21:09:05 -- nvmf/common.sh@119 -- # set +e 00:13:27.220 21:09:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:27.220 21:09:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:27.481 rmmod nvme_tcp 00:13:27.481 rmmod nvme_fabrics 00:13:27.481 rmmod nvme_keyring 00:13:27.481 21:09:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:27.481 21:09:05 -- nvmf/common.sh@123 -- # set -e 00:13:27.481 21:09:05 -- 
nvmf/common.sh@124 -- # return 0 00:13:27.481 21:09:05 -- nvmf/common.sh@477 -- # '[' -n 2270795 ']' 00:13:27.481 21:09:05 -- nvmf/common.sh@478 -- # killprocess 2270795 00:13:27.481 21:09:05 -- common/autotest_common.sh@926 -- # '[' -z 2270795 ']' 00:13:27.481 21:09:05 -- common/autotest_common.sh@930 -- # kill -0 2270795 00:13:27.481 21:09:05 -- common/autotest_common.sh@931 -- # uname 00:13:27.481 21:09:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:27.481 21:09:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2270795 00:13:27.481 21:09:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:27.481 21:09:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:27.481 21:09:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2270795' 00:13:27.481 killing process with pid 2270795 00:13:27.481 21:09:05 -- common/autotest_common.sh@945 -- # kill 2270795 00:13:27.481 21:09:05 -- common/autotest_common.sh@950 -- # wait 2270795 00:13:27.481 21:09:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.481 21:09:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.481 21:09:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.481 21:09:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.481 21:09:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.481 21:09:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.481 21:09:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.481 21:09:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.033 21:09:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:30.033 00:13:30.033 real 0m37.160s 00:13:30.033 user 1m52.816s 00:13:30.033 sys 0m7.072s 00:13:30.033 21:09:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.033 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.033 ************************************ 00:13:30.033 END TEST nvmf_rpc 00:13:30.033 ************************************ 00:13:30.033 21:09:07 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:30.033 21:09:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:30.033 21:09:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.033 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.033 ************************************ 00:13:30.033 START TEST nvmf_invalid 00:13:30.033 ************************************ 00:13:30.033 21:09:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:30.033 * Looking for test storage... 
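As an aside on the nvmf_rpc stats check above: the jsum helper just sums one numeric field across all poll groups of the nvmf_get_stats output with jq and awk. A standalone equivalent is sketched below, assuming the same rpc.py path as this run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Total io_qpairs across poll groups; swap in .admin_qpairs for the other check.
$rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s += $1} END {print s}'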
00:13:30.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.033 21:09:07 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.033 21:09:07 -- nvmf/common.sh@7 -- # uname -s 00:13:30.033 21:09:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.033 21:09:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.033 21:09:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.033 21:09:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.033 21:09:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.033 21:09:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.033 21:09:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.033 21:09:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.033 21:09:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.033 21:09:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.033 21:09:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:30.033 21:09:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:30.033 21:09:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.033 21:09:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.033 21:09:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.033 21:09:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.033 21:09:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.033 21:09:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.033 21:09:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.033 21:09:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.033 21:09:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.033 21:09:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.033 21:09:07 -- paths/export.sh@5 -- # export PATH 00:13:30.033 21:09:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.033 21:09:07 -- nvmf/common.sh@46 -- # : 0 00:13:30.033 21:09:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:30.033 21:09:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:30.033 21:09:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:30.033 21:09:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.033 21:09:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.033 21:09:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:30.033 21:09:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:30.033 21:09:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:30.033 21:09:07 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:30.033 21:09:07 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.033 21:09:07 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:30.033 21:09:07 -- target/invalid.sh@14 -- # target=foobar 00:13:30.033 21:09:07 -- target/invalid.sh@16 -- # RANDOM=0 00:13:30.033 21:09:07 -- target/invalid.sh@34 -- # nvmftestinit 00:13:30.033 21:09:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:30.033 21:09:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.033 21:09:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:30.033 21:09:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:30.033 21:09:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:30.033 21:09:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.033 21:09:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.033 21:09:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.033 21:09:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:30.033 21:09:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:30.033 21:09:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:30.033 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:13:36.625 21:09:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:36.625 21:09:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:36.625 21:09:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:36.625 21:09:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:36.625 21:09:14 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:36.625 21:09:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:36.625 21:09:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:36.625 21:09:14 -- nvmf/common.sh@294 -- # net_devs=() 00:13:36.625 21:09:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:36.625 21:09:14 -- nvmf/common.sh@295 -- # e810=() 00:13:36.625 21:09:14 -- nvmf/common.sh@295 -- # local -ga e810 00:13:36.625 21:09:14 -- nvmf/common.sh@296 -- # x722=() 00:13:36.625 21:09:14 -- nvmf/common.sh@296 -- # local -ga x722 00:13:36.625 21:09:14 -- nvmf/common.sh@297 -- # mlx=() 00:13:36.625 21:09:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:36.625 21:09:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.625 21:09:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:36.625 21:09:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:36.625 21:09:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:36.625 21:09:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:36.625 21:09:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:36.625 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:36.625 21:09:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:36.625 21:09:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:36.625 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:36.625 21:09:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:36.625 21:09:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:36.625 
21:09:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.625 21:09:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:36.625 21:09:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.625 21:09:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:36.625 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:36.625 21:09:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.625 21:09:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:36.625 21:09:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.625 21:09:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:36.625 21:09:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.625 21:09:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:36.625 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:36.625 21:09:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.625 21:09:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:36.625 21:09:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:36.625 21:09:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:36.625 21:09:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:36.625 21:09:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.625 21:09:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.626 21:09:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.626 21:09:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:36.626 21:09:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.626 21:09:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.626 21:09:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:36.626 21:09:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.626 21:09:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.626 21:09:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:36.626 21:09:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:36.626 21:09:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.626 21:09:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.626 21:09:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.626 21:09:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.626 21:09:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:36.626 21:09:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.626 21:09:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.626 21:09:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.626 21:09:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:36.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:13:36.626 00:13:36.626 --- 10.0.0.2 ping statistics --- 00:13:36.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.626 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:13:36.626 21:09:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:13:36.626 00:13:36.626 --- 10.0.0.1 ping statistics --- 00:13:36.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.626 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:13:36.626 21:09:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.626 21:09:14 -- nvmf/common.sh@410 -- # return 0 00:13:36.626 21:09:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:36.626 21:09:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.626 21:09:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:36.626 21:09:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:36.626 21:09:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.626 21:09:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:36.626 21:09:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:36.626 21:09:14 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:36.626 21:09:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:36.626 21:09:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:36.626 21:09:14 -- common/autotest_common.sh@10 -- # set +x 00:13:36.626 21:09:14 -- nvmf/common.sh@469 -- # nvmfpid=2281169 00:13:36.626 21:09:14 -- nvmf/common.sh@470 -- # waitforlisten 2281169 00:13:36.626 21:09:14 -- common/autotest_common.sh@819 -- # '[' -z 2281169 ']' 00:13:36.626 21:09:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.626 21:09:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:36.626 21:09:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.626 21:09:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:36.626 21:09:14 -- common/autotest_common.sh@10 -- # set +x 00:13:36.626 21:09:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.626 [2024-06-08 21:09:14.695251] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:36.626 [2024-06-08 21:09:14.695299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.887 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.887 [2024-06-08 21:09:14.760201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.887 [2024-06-08 21:09:14.825458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.887 [2024-06-08 21:09:14.825584] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.887 [2024-06-08 21:09:14.825594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.887 [2024-06-08 21:09:14.825602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
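The nvmftestinit sequence traced above boils down to the following namespace plumbing before nvmf_tgt is started: one port of the NIC pair is moved into a network namespace, both ends are addressed, TCP/4420 is opened, reachability is verified, and the target is launched inside the namespace. A condensed sketch; interface names (cvl_0_0/cvl_0_1), addresses and nvmf_tgt flags are taken from this run and are test-bed specific.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF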
00:13:36.887 [2024-06-08 21:09:14.825741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.887 [2024-06-08 21:09:14.825839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.887 [2024-06-08 21:09:14.825998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.887 [2024-06-08 21:09:14.825999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.458 21:09:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:37.458 21:09:15 -- common/autotest_common.sh@852 -- # return 0 00:13:37.458 21:09:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:37.458 21:09:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:37.458 21:09:15 -- common/autotest_common.sh@10 -- # set +x 00:13:37.458 21:09:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.458 21:09:15 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:37.458 21:09:15 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1179 00:13:37.719 [2024-06-08 21:09:15.641987] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:37.719 21:09:15 -- target/invalid.sh@40 -- # out='request: 00:13:37.719 { 00:13:37.719 "nqn": "nqn.2016-06.io.spdk:cnode1179", 00:13:37.719 "tgt_name": "foobar", 00:13:37.719 "method": "nvmf_create_subsystem", 00:13:37.719 "req_id": 1 00:13:37.719 } 00:13:37.719 Got JSON-RPC error response 00:13:37.719 response: 00:13:37.719 { 00:13:37.719 "code": -32603, 00:13:37.719 "message": "Unable to find target foobar" 00:13:37.719 }' 00:13:37.719 21:09:15 -- target/invalid.sh@41 -- # [[ request: 00:13:37.719 { 00:13:37.719 "nqn": "nqn.2016-06.io.spdk:cnode1179", 00:13:37.719 "tgt_name": "foobar", 00:13:37.719 "method": "nvmf_create_subsystem", 00:13:37.719 "req_id": 1 00:13:37.719 } 00:13:37.719 Got JSON-RPC error response 00:13:37.719 response: 00:13:37.719 { 00:13:37.719 "code": -32603, 00:13:37.719 "message": "Unable to find target foobar" 00:13:37.719 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:37.719 21:09:15 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:37.719 21:09:15 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11489 00:13:37.980 [2024-06-08 21:09:15.814572] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11489: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:37.980 21:09:15 -- target/invalid.sh@45 -- # out='request: 00:13:37.980 { 00:13:37.980 "nqn": "nqn.2016-06.io.spdk:cnode11489", 00:13:37.980 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:37.980 "method": "nvmf_create_subsystem", 00:13:37.980 "req_id": 1 00:13:37.980 } 00:13:37.980 Got JSON-RPC error response 00:13:37.980 response: 00:13:37.980 { 00:13:37.980 "code": -32602, 00:13:37.980 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:37.980 }' 00:13:37.980 21:09:15 -- target/invalid.sh@46 -- # [[ request: 00:13:37.980 { 00:13:37.980 "nqn": "nqn.2016-06.io.spdk:cnode11489", 00:13:37.980 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:37.980 "method": "nvmf_create_subsystem", 00:13:37.980 "req_id": 1 00:13:37.980 } 00:13:37.980 Got JSON-RPC error response 00:13:37.980 response: 00:13:37.980 { 
00:13:37.980 "code": -32602, 00:13:37.980 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:37.980 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:37.980 21:09:15 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:37.980 21:09:15 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2451 00:13:37.980 [2024-06-08 21:09:15.987154] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2451: invalid model number 'SPDK_Controller' 00:13:37.980 21:09:16 -- target/invalid.sh@50 -- # out='request: 00:13:37.980 { 00:13:37.980 "nqn": "nqn.2016-06.io.spdk:cnode2451", 00:13:37.980 "model_number": "SPDK_Controller\u001f", 00:13:37.980 "method": "nvmf_create_subsystem", 00:13:37.980 "req_id": 1 00:13:37.980 } 00:13:37.980 Got JSON-RPC error response 00:13:37.980 response: 00:13:37.980 { 00:13:37.980 "code": -32602, 00:13:37.980 "message": "Invalid MN SPDK_Controller\u001f" 00:13:37.980 }' 00:13:37.980 21:09:16 -- target/invalid.sh@51 -- # [[ request: 00:13:37.980 { 00:13:37.980 "nqn": "nqn.2016-06.io.spdk:cnode2451", 00:13:37.980 "model_number": "SPDK_Controller\u001f", 00:13:37.980 "method": "nvmf_create_subsystem", 00:13:37.980 "req_id": 1 00:13:37.980 } 00:13:37.980 Got JSON-RPC error response 00:13:37.980 response: 00:13:37.980 { 00:13:37.980 "code": -32602, 00:13:37.980 "message": "Invalid MN SPDK_Controller\u001f" 00:13:37.980 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:37.980 21:09:16 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:37.980 21:09:16 -- target/invalid.sh@19 -- # local length=21 ll 00:13:37.980 21:09:16 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:37.980 21:09:16 -- target/invalid.sh@21 -- # local chars 00:13:37.980 21:09:16 -- target/invalid.sh@22 -- # local string 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 113 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # string+=q 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 79 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # string+=O 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 68 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # string+=D 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 119 00:13:37.980 21:09:16 -- target/invalid.sh@25 
-- # echo -e '\x77' 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # string+=w 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 43 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # string+=+ 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 51 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # string+=3 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.980 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.980 21:09:16 -- target/invalid.sh@25 -- # printf %x 60 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+='<' 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 99 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=c 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 64 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=@ 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 96 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+='`' 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 55 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=7 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 101 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=e 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 42 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+='*' 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 106 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=j 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 116 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- 
# echo -e '\x74' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=t 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 103 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=g 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 55 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=7 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 120 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=x 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 52 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=4 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # printf %x 117 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:38.241 21:09:16 -- target/invalid.sh@25 -- # string+=u 00:13:38.241 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.242 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.242 21:09:16 -- target/invalid.sh@25 -- # printf %x 67 00:13:38.242 21:09:16 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:38.242 21:09:16 -- target/invalid.sh@25 -- # string+=C 00:13:38.242 21:09:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:38.242 21:09:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:38.242 21:09:16 -- target/invalid.sh@28 -- # [[ q == \- ]] 00:13:38.242 21:09:16 -- target/invalid.sh@31 -- # echo 'qODw+3T@I2+*@GQ"}G[ek;l8.&Q' 00:13:38.764 21:09:16 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%0!D~t$OUL@MNE*O} e>T@I2+*@GQ"}G[ek;l8.&Q' nqn.2016-06.io.spdk:cnode12588 00:13:38.764 [2024-06-08 21:09:16.785745] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12588: invalid model number '%0!D~t$OUL@MNE*O} e>T@I2+*@GQ"}G[ek;l8.&Q' 00:13:38.764 21:09:16 -- target/invalid.sh@58 -- # out='request: 00:13:38.764 { 00:13:38.764 "nqn": "nqn.2016-06.io.spdk:cnode12588", 00:13:38.764 "model_number": "%0!D~t$OUL@MNE*O} e>T@I2+*@GQ\"}G[ek;l8.&Q", 00:13:38.764 "method": "nvmf_create_subsystem", 00:13:38.764 "req_id": 1 00:13:38.764 } 00:13:38.764 Got JSON-RPC error response 00:13:38.764 response: 00:13:38.764 { 00:13:38.764 "code": -32602, 00:13:38.764 "message": "Invalid MN %0!D~t$OUL@MNE*O} e>T@I2+*@GQ\"}G[ek;l8.&Q" 00:13:38.764 }' 00:13:38.764 21:09:16 -- target/invalid.sh@59 -- # [[ request: 00:13:38.764 { 00:13:38.764 "nqn": "nqn.2016-06.io.spdk:cnode12588", 00:13:38.764 "model_number": "%0!D~t$OUL@MNE*O} e>T@I2+*@GQ\"}G[ek;l8.&Q", 00:13:38.764 "method": "nvmf_create_subsystem", 00:13:38.764 "req_id": 1 00:13:38.764 } 00:13:38.764 Got JSON-RPC error response 
00:13:38.764 response: 00:13:38.764 { 00:13:38.764 "code": -32602, 00:13:38.764 "message": "Invalid MN %0!D~t$OUL@MNE*O} e>T@I2+*@GQ\"}G[ek;l8.&Q" 00:13:38.764 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:38.764 21:09:16 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:39.025 [2024-06-08 21:09:16.954358] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.025 21:09:16 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:39.286 21:09:17 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:39.286 21:09:17 -- target/invalid.sh@67 -- # echo '' 00:13:39.286 21:09:17 -- target/invalid.sh@67 -- # head -n 1 00:13:39.286 21:09:17 -- target/invalid.sh@67 -- # IP= 00:13:39.286 21:09:17 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:39.286 [2024-06-08 21:09:17.299518] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:39.286 21:09:17 -- target/invalid.sh@69 -- # out='request: 00:13:39.286 { 00:13:39.286 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:39.286 "listen_address": { 00:13:39.286 "trtype": "tcp", 00:13:39.286 "traddr": "", 00:13:39.286 "trsvcid": "4421" 00:13:39.286 }, 00:13:39.286 "method": "nvmf_subsystem_remove_listener", 00:13:39.286 "req_id": 1 00:13:39.286 } 00:13:39.286 Got JSON-RPC error response 00:13:39.286 response: 00:13:39.286 { 00:13:39.286 "code": -32602, 00:13:39.286 "message": "Invalid parameters" 00:13:39.286 }' 00:13:39.286 21:09:17 -- target/invalid.sh@70 -- # [[ request: 00:13:39.286 { 00:13:39.286 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:39.286 "listen_address": { 00:13:39.286 "trtype": "tcp", 00:13:39.286 "traddr": "", 00:13:39.286 "trsvcid": "4421" 00:13:39.286 }, 00:13:39.286 "method": "nvmf_subsystem_remove_listener", 00:13:39.286 "req_id": 1 00:13:39.286 } 00:13:39.286 Got JSON-RPC error response 00:13:39.286 response: 00:13:39.286 { 00:13:39.286 "code": -32602, 00:13:39.286 "message": "Invalid parameters" 00:13:39.286 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:39.286 21:09:17 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13659 -i 0 00:13:39.547 [2024-06-08 21:09:17.468071] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13659: invalid cntlid range [0-65519] 00:13:39.547 21:09:17 -- target/invalid.sh@73 -- # out='request: 00:13:39.547 { 00:13:39.547 "nqn": "nqn.2016-06.io.spdk:cnode13659", 00:13:39.547 "min_cntlid": 0, 00:13:39.547 "method": "nvmf_create_subsystem", 00:13:39.547 "req_id": 1 00:13:39.547 } 00:13:39.547 Got JSON-RPC error response 00:13:39.547 response: 00:13:39.547 { 00:13:39.547 "code": -32602, 00:13:39.547 "message": "Invalid cntlid range [0-65519]" 00:13:39.547 }' 00:13:39.547 21:09:17 -- target/invalid.sh@74 -- # [[ request: 00:13:39.547 { 00:13:39.547 "nqn": "nqn.2016-06.io.spdk:cnode13659", 00:13:39.547 "min_cntlid": 0, 00:13:39.547 "method": "nvmf_create_subsystem", 00:13:39.547 "req_id": 1 00:13:39.547 } 00:13:39.547 Got JSON-RPC error response 00:13:39.547 response: 00:13:39.547 { 00:13:39.547 "code": -32602, 00:13:39.547 "message": "Invalid cntlid range [0-65519]" 00:13:39.547 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:13:39.547 21:09:17 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11299 -i 65520 00:13:39.547 [2024-06-08 21:09:17.636652] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11299: invalid cntlid range [65520-65519] 00:13:39.807 21:09:17 -- target/invalid.sh@75 -- # out='request: 00:13:39.807 { 00:13:39.807 "nqn": "nqn.2016-06.io.spdk:cnode11299", 00:13:39.808 "min_cntlid": 65520, 00:13:39.808 "method": "nvmf_create_subsystem", 00:13:39.808 "req_id": 1 00:13:39.808 } 00:13:39.808 Got JSON-RPC error response 00:13:39.808 response: 00:13:39.808 { 00:13:39.808 "code": -32602, 00:13:39.808 "message": "Invalid cntlid range [65520-65519]" 00:13:39.808 }' 00:13:39.808 21:09:17 -- target/invalid.sh@76 -- # [[ request: 00:13:39.808 { 00:13:39.808 "nqn": "nqn.2016-06.io.spdk:cnode11299", 00:13:39.808 "min_cntlid": 65520, 00:13:39.808 "method": "nvmf_create_subsystem", 00:13:39.808 "req_id": 1 00:13:39.808 } 00:13:39.808 Got JSON-RPC error response 00:13:39.808 response: 00:13:39.808 { 00:13:39.808 "code": -32602, 00:13:39.808 "message": "Invalid cntlid range [65520-65519]" 00:13:39.808 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:39.808 21:09:17 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6001 -I 0 00:13:39.808 [2024-06-08 21:09:17.805237] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6001: invalid cntlid range [1-0] 00:13:39.808 21:09:17 -- target/invalid.sh@77 -- # out='request: 00:13:39.808 { 00:13:39.808 "nqn": "nqn.2016-06.io.spdk:cnode6001", 00:13:39.808 "max_cntlid": 0, 00:13:39.808 "method": "nvmf_create_subsystem", 00:13:39.808 "req_id": 1 00:13:39.808 } 00:13:39.808 Got JSON-RPC error response 00:13:39.808 response: 00:13:39.808 { 00:13:39.808 "code": -32602, 00:13:39.808 "message": "Invalid cntlid range [1-0]" 00:13:39.808 }' 00:13:39.808 21:09:17 -- target/invalid.sh@78 -- # [[ request: 00:13:39.808 { 00:13:39.808 "nqn": "nqn.2016-06.io.spdk:cnode6001", 00:13:39.808 "max_cntlid": 0, 00:13:39.808 "method": "nvmf_create_subsystem", 00:13:39.808 "req_id": 1 00:13:39.808 } 00:13:39.808 Got JSON-RPC error response 00:13:39.808 response: 00:13:39.808 { 00:13:39.808 "code": -32602, 00:13:39.808 "message": "Invalid cntlid range [1-0]" 00:13:39.808 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:39.808 21:09:17 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19545 -I 65520 00:13:40.068 [2024-06-08 21:09:17.973844] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19545: invalid cntlid range [1-65520] 00:13:40.068 21:09:18 -- target/invalid.sh@79 -- # out='request: 00:13:40.068 { 00:13:40.068 "nqn": "nqn.2016-06.io.spdk:cnode19545", 00:13:40.068 "max_cntlid": 65520, 00:13:40.068 "method": "nvmf_create_subsystem", 00:13:40.068 "req_id": 1 00:13:40.068 } 00:13:40.068 Got JSON-RPC error response 00:13:40.068 response: 00:13:40.068 { 00:13:40.068 "code": -32602, 00:13:40.068 "message": "Invalid cntlid range [1-65520]" 00:13:40.068 }' 00:13:40.068 21:09:18 -- target/invalid.sh@80 -- # [[ request: 00:13:40.068 { 00:13:40.068 "nqn": "nqn.2016-06.io.spdk:cnode19545", 00:13:40.068 "max_cntlid": 65520, 00:13:40.068 "method": "nvmf_create_subsystem", 00:13:40.068 "req_id": 1 
00:13:40.068 } 00:13:40.068 Got JSON-RPC error response 00:13:40.068 response: 00:13:40.068 { 00:13:40.068 "code": -32602, 00:13:40.068 "message": "Invalid cntlid range [1-65520]" 00:13:40.068 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:40.068 21:09:18 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31512 -i 6 -I 5 00:13:40.068 [2024-06-08 21:09:18.134331] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31512: invalid cntlid range [6-5] 00:13:40.328 21:09:18 -- target/invalid.sh@83 -- # out='request: 00:13:40.328 { 00:13:40.328 "nqn": "nqn.2016-06.io.spdk:cnode31512", 00:13:40.328 "min_cntlid": 6, 00:13:40.328 "max_cntlid": 5, 00:13:40.328 "method": "nvmf_create_subsystem", 00:13:40.328 "req_id": 1 00:13:40.328 } 00:13:40.328 Got JSON-RPC error response 00:13:40.328 response: 00:13:40.328 { 00:13:40.328 "code": -32602, 00:13:40.328 "message": "Invalid cntlid range [6-5]" 00:13:40.328 }' 00:13:40.328 21:09:18 -- target/invalid.sh@84 -- # [[ request: 00:13:40.328 { 00:13:40.328 "nqn": "nqn.2016-06.io.spdk:cnode31512", 00:13:40.328 "min_cntlid": 6, 00:13:40.328 "max_cntlid": 5, 00:13:40.328 "method": "nvmf_create_subsystem", 00:13:40.328 "req_id": 1 00:13:40.328 } 00:13:40.328 Got JSON-RPC error response 00:13:40.328 response: 00:13:40.328 { 00:13:40.328 "code": -32602, 00:13:40.328 "message": "Invalid cntlid range [6-5]" 00:13:40.328 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:40.328 21:09:18 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:40.328 21:09:18 -- target/invalid.sh@87 -- # out='request: 00:13:40.328 { 00:13:40.328 "name": "foobar", 00:13:40.328 "method": "nvmf_delete_target", 00:13:40.328 "req_id": 1 00:13:40.328 } 00:13:40.328 Got JSON-RPC error response 00:13:40.328 response: 00:13:40.328 { 00:13:40.328 "code": -32602, 00:13:40.328 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:40.328 }' 00:13:40.328 21:09:18 -- target/invalid.sh@88 -- # [[ request: 00:13:40.328 { 00:13:40.328 "name": "foobar", 00:13:40.328 "method": "nvmf_delete_target", 00:13:40.328 "req_id": 1 00:13:40.328 } 00:13:40.328 Got JSON-RPC error response 00:13:40.328 response: 00:13:40.328 { 00:13:40.328 "code": -32602, 00:13:40.328 "message": "The specified target doesn't exist, cannot delete it." 
00:13:40.328 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:40.329 21:09:18 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:40.329 21:09:18 -- target/invalid.sh@91 -- # nvmftestfini 00:13:40.329 21:09:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:40.329 21:09:18 -- nvmf/common.sh@116 -- # sync 00:13:40.329 21:09:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:40.329 21:09:18 -- nvmf/common.sh@119 -- # set +e 00:13:40.329 21:09:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:40.329 21:09:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:40.329 rmmod nvme_tcp 00:13:40.329 rmmod nvme_fabrics 00:13:40.329 rmmod nvme_keyring 00:13:40.329 21:09:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:40.329 21:09:18 -- nvmf/common.sh@123 -- # set -e 00:13:40.329 21:09:18 -- nvmf/common.sh@124 -- # return 0 00:13:40.329 21:09:18 -- nvmf/common.sh@477 -- # '[' -n 2281169 ']' 00:13:40.329 21:09:18 -- nvmf/common.sh@478 -- # killprocess 2281169 00:13:40.329 21:09:18 -- common/autotest_common.sh@926 -- # '[' -z 2281169 ']' 00:13:40.329 21:09:18 -- common/autotest_common.sh@930 -- # kill -0 2281169 00:13:40.329 21:09:18 -- common/autotest_common.sh@931 -- # uname 00:13:40.329 21:09:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:40.329 21:09:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2281169 00:13:40.329 21:09:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:40.329 21:09:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:40.329 21:09:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2281169' 00:13:40.329 killing process with pid 2281169 00:13:40.329 21:09:18 -- common/autotest_common.sh@945 -- # kill 2281169 00:13:40.329 21:09:18 -- common/autotest_common.sh@950 -- # wait 2281169 00:13:40.590 21:09:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:40.590 21:09:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:40.590 21:09:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:40.590 21:09:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.590 21:09:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:40.590 21:09:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.590 21:09:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.590 21:09:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.504 21:09:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:42.769 00:13:42.769 real 0m12.906s 00:13:42.769 user 0m18.751s 00:13:42.769 sys 0m5.992s 00:13:42.769 21:09:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.769 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:13:42.769 ************************************ 00:13:42.769 END TEST nvmf_invalid 00:13:42.769 ************************************ 00:13:42.769 21:09:20 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:42.770 21:09:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:42.770 21:09:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.770 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:13:42.770 ************************************ 00:13:42.770 START TEST nvmf_abort 00:13:42.770 ************************************ 00:13:42.770 21:09:20 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:42.770 * Looking for test storage... 00:13:42.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.770 21:09:20 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.770 21:09:20 -- nvmf/common.sh@7 -- # uname -s 00:13:42.770 21:09:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.770 21:09:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.770 21:09:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.770 21:09:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.770 21:09:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.770 21:09:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.770 21:09:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.770 21:09:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.770 21:09:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.770 21:09:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.770 21:09:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.770 21:09:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.770 21:09:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.770 21:09:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.770 21:09:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.770 21:09:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.770 21:09:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.770 21:09:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.770 21:09:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.770 21:09:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.770 21:09:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.770 21:09:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.770 21:09:20 -- paths/export.sh@5 -- # export PATH 00:13:42.770 21:09:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.770 21:09:20 -- nvmf/common.sh@46 -- # : 0 00:13:42.770 21:09:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:42.770 21:09:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:42.770 21:09:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:42.770 21:09:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.770 21:09:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.770 21:09:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:42.770 21:09:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:42.770 21:09:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:42.770 21:09:20 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.770 21:09:20 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:42.770 21:09:20 -- target/abort.sh@14 -- # nvmftestinit 00:13:42.770 21:09:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:42.770 21:09:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.770 21:09:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:42.770 21:09:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:42.770 21:09:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:42.770 21:09:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.770 21:09:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.770 21:09:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.770 21:09:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:42.770 21:09:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:42.770 21:09:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:42.770 21:09:20 -- common/autotest_common.sh@10 -- # set +x 00:13:49.427 21:09:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:49.427 21:09:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:49.427 21:09:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:49.427 21:09:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:49.427 21:09:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:49.427 21:09:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:49.427 21:09:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:49.427 21:09:27 -- nvmf/common.sh@294 -- # net_devs=() 00:13:49.427 21:09:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:49.427 21:09:27 -- nvmf/common.sh@295 -- 
# e810=() 00:13:49.427 21:09:27 -- nvmf/common.sh@295 -- # local -ga e810 00:13:49.427 21:09:27 -- nvmf/common.sh@296 -- # x722=() 00:13:49.427 21:09:27 -- nvmf/common.sh@296 -- # local -ga x722 00:13:49.427 21:09:27 -- nvmf/common.sh@297 -- # mlx=() 00:13:49.428 21:09:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:49.428 21:09:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.428 21:09:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:49.428 21:09:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:49.428 21:09:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:49.428 21:09:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:49.428 21:09:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:49.428 21:09:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:49.689 21:09:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:49.689 21:09:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:49.689 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:49.689 21:09:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:49.689 21:09:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:49.689 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:49.689 21:09:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:49.689 21:09:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:49.689 21:09:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.689 21:09:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:49.689 21:09:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.689 21:09:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:49.689 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:13:49.689 21:09:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.689 21:09:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:49.689 21:09:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.689 21:09:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:49.689 21:09:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.689 21:09:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:49.689 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:49.689 21:09:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.689 21:09:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:49.689 21:09:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:49.689 21:09:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:49.689 21:09:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:49.689 21:09:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.689 21:09:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.689 21:09:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.689 21:09:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:49.689 21:09:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.689 21:09:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.689 21:09:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:49.689 21:09:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.689 21:09:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.689 21:09:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:49.689 21:09:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:49.689 21:09:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.689 21:09:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.689 21:09:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.689 21:09:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.689 21:09:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:49.689 21:09:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.950 21:09:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.950 21:09:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.950 21:09:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:49.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:13:49.950 00:13:49.950 --- 10.0.0.2 ping statistics --- 00:13:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.950 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:13:49.950 21:09:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:13:49.950 00:13:49.950 --- 10.0.0.1 ping statistics --- 00:13:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.950 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:13:49.950 21:09:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.950 21:09:27 -- nvmf/common.sh@410 -- # return 0 00:13:49.950 21:09:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:49.950 21:09:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.950 21:09:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:49.950 21:09:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:49.950 21:09:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.950 21:09:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:49.950 21:09:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:49.950 21:09:27 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:49.950 21:09:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:49.951 21:09:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:49.951 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:49.951 21:09:27 -- nvmf/common.sh@469 -- # nvmfpid=2286165 00:13:49.951 21:09:27 -- nvmf/common.sh@470 -- # waitforlisten 2286165 00:13:49.951 21:09:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:49.951 21:09:27 -- common/autotest_common.sh@819 -- # '[' -z 2286165 ']' 00:13:49.951 21:09:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.951 21:09:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:49.951 21:09:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.951 21:09:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:49.951 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:49.951 [2024-06-08 21:09:27.939183] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:49.951 [2024-06-08 21:09:27.939248] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.951 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.951 [2024-06-08 21:09:28.027498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.211 [2024-06-08 21:09:28.119834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:50.211 [2024-06-08 21:09:28.120007] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.211 [2024-06-08 21:09:28.120018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.211 [2024-06-08 21:09:28.120027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
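A condensed sketch of the target bring-up traced above: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE, and the script blocks until the JSON-RPC socket is ready before issuing any rpc.py calls. The binary path, namespace, and flags are copied from the trace; the polling loop and its timeout below are illustrative assumptions, not the harness's own waitforlisten implementation.

  # Sketch: start the target in the test namespace, then wait for its RPC socket.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for _ in $(seq 1 100); do            # ~10 s budget (assumption)
      [ -S "$SOCK" ] && break          # the socket appears once the app is listening
      sleep 0.1
  done
  kill -0 "$nvmfpid"                   # fail fast if the target already exited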
00:13:50.211 [2024-06-08 21:09:28.120168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.211 [2024-06-08 21:09:28.120336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.211 [2024-06-08 21:09:28.120337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.783 21:09:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:50.783 21:09:28 -- common/autotest_common.sh@852 -- # return 0 00:13:50.783 21:09:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:50.783 21:09:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 21:09:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.783 21:09:28 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 [2024-06-08 21:09:28.762046] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 Malloc0 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 Delay0 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 [2024-06-08 21:09:28.851788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.783 21:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.783 21:09:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.783 21:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.783 21:09:28 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:51.044 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.044 [2024-06-08 21:09:28.973250] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:53.592 Initializing NVMe Controllers 00:13:53.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:53.592 controller IO queue size 128 less than required 00:13:53.592 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:53.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:53.592 Initialization complete. Launching workers. 00:13:53.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29981 00:13:53.592 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30042, failed to submit 62 00:13:53.592 success 29981, unsuccess 61, failed 0 00:13:53.592 21:09:31 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:53.592 21:09:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.592 21:09:31 -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 21:09:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.592 21:09:31 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:53.592 21:09:31 -- target/abort.sh@38 -- # nvmftestfini 00:13:53.592 21:09:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:53.592 21:09:31 -- nvmf/common.sh@116 -- # sync 00:13:53.592 21:09:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:53.592 21:09:31 -- nvmf/common.sh@119 -- # set +e 00:13:53.592 21:09:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:53.592 21:09:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:53.592 rmmod nvme_tcp 00:13:53.592 rmmod nvme_fabrics 00:13:53.592 rmmod nvme_keyring 00:13:53.592 21:09:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:53.592 21:09:31 -- nvmf/common.sh@123 -- # set -e 00:13:53.592 21:09:31 -- nvmf/common.sh@124 -- # return 0 00:13:53.592 21:09:31 -- nvmf/common.sh@477 -- # '[' -n 2286165 ']' 00:13:53.592 21:09:31 -- nvmf/common.sh@478 -- # killprocess 2286165 00:13:53.592 21:09:31 -- common/autotest_common.sh@926 -- # '[' -z 2286165 ']' 00:13:53.592 21:09:31 -- common/autotest_common.sh@930 -- # kill -0 2286165 00:13:53.592 21:09:31 -- common/autotest_common.sh@931 -- # uname 00:13:53.592 21:09:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:53.592 21:09:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2286165 00:13:53.592 21:09:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:53.592 21:09:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:53.592 21:09:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2286165' 00:13:53.592 killing process with pid 2286165 00:13:53.592 21:09:31 -- common/autotest_common.sh@945 -- # kill 2286165 00:13:53.592 21:09:31 -- common/autotest_common.sh@950 -- # wait 2286165 00:13:53.592 21:09:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:53.592 21:09:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:53.592 21:09:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:53.592 21:09:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.592 21:09:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:53.592 
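The abort run that just finished reduces to the RPC sequence and workload invocation below, collected from the trace: a 64 MB malloc bdev is wrapped in a delay bdev with large artificial latencies (1000000 each), exported over NVMe/TCP, and the abort example then drives it at queue depth 128 so that most requests are still outstanding when the aborts are issued (success 29981, unsuccess 61 above). Only the $rpc and $SPDK_ROOT shorthands are assumptions; every command and flag appears verbatim in the log.

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_ROOT/scripts/rpc.py"      # rpc.py defaults to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK_ROOT/build/examples/abort" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0      # clean up before nvmftestfini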
21:09:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.592 21:09:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.592 21:09:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.505 21:09:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:55.505 00:13:55.505 real 0m12.838s 00:13:55.505 user 0m13.608s 00:13:55.505 sys 0m6.221s 00:13:55.505 21:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.505 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 ************************************ 00:13:55.505 END TEST nvmf_abort 00:13:55.505 ************************************ 00:13:55.505 21:09:33 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:55.505 21:09:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:55.505 21:09:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.505 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 ************************************ 00:13:55.505 START TEST nvmf_ns_hotplug_stress 00:13:55.505 ************************************ 00:13:55.505 21:09:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:55.766 * Looking for test storage... 00:13:55.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.766 21:09:33 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.766 21:09:33 -- nvmf/common.sh@7 -- # uname -s 00:13:55.766 21:09:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.766 21:09:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.766 21:09:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.766 21:09:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.766 21:09:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.766 21:09:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.766 21:09:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.766 21:09:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.766 21:09:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.766 21:09:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.766 21:09:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.766 21:09:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.766 21:09:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.766 21:09:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.766 21:09:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.766 21:09:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.766 21:09:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.766 21:09:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.766 21:09:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.766 21:09:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.766 21:09:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.766 21:09:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.766 21:09:33 -- paths/export.sh@5 -- # export PATH 00:13:55.766 21:09:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.766 21:09:33 -- nvmf/common.sh@46 -- # : 0 00:13:55.766 21:09:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:55.766 21:09:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:55.766 21:09:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:55.766 21:09:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.766 21:09:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.766 21:09:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:55.766 21:09:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:55.766 21:09:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:55.766 21:09:33 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.766 21:09:33 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:55.766 21:09:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:55.766 21:09:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.766 21:09:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:55.766 21:09:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:55.766 21:09:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:55.766 21:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:55.766 21:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.766 21:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.766 21:09:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:55.766 21:09:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:55.766 21:09:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:55.766 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.359 21:09:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:02.359 21:09:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:02.359 21:09:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:02.359 21:09:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:02.359 21:09:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:02.359 21:09:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:02.359 21:09:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:02.359 21:09:40 -- nvmf/common.sh@294 -- # net_devs=() 00:14:02.359 21:09:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:02.359 21:09:40 -- nvmf/common.sh@295 -- # e810=() 00:14:02.359 21:09:40 -- nvmf/common.sh@295 -- # local -ga e810 00:14:02.359 21:09:40 -- nvmf/common.sh@296 -- # x722=() 00:14:02.359 21:09:40 -- nvmf/common.sh@296 -- # local -ga x722 00:14:02.359 21:09:40 -- nvmf/common.sh@297 -- # mlx=() 00:14:02.359 21:09:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:02.359 21:09:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.359 21:09:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:02.359 21:09:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:02.359 21:09:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:02.359 21:09:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:02.359 21:09:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.359 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.359 21:09:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:02.359 21:09:40 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.359 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.359 21:09:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:02.359 21:09:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:02.359 21:09:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.359 21:09:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:02.359 21:09:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.359 21:09:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.359 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.359 21:09:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.359 21:09:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:02.359 21:09:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.359 21:09:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:02.359 21:09:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.359 21:09:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.359 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.359 21:09:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.359 21:09:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:02.359 21:09:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:02.359 21:09:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:02.359 21:09:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:02.359 21:09:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.359 21:09:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.359 21:09:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.359 21:09:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:02.359 21:09:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.359 21:09:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.359 21:09:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:02.359 21:09:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.359 21:09:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.359 21:09:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:02.359 21:09:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:02.359 21:09:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.359 21:09:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.621 21:09:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.621 21:09:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.621 21:09:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:02.621 21:09:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
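The nvmf_tcp_init steps being traced give every run a private point-to-point topology: one E810 port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace for the target while the other (cvl_0_1, 10.0.0.1) stays in the host namespace for the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. Collected as a standalone sequence, with every command taken verbatim from the trace:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1       # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                               # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                         # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> host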
00:14:02.621 21:09:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.621 21:09:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.882 21:09:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:02.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:14:02.882 00:14:02.882 --- 10.0.0.2 ping statistics --- 00:14:02.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.882 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:14:02.882 21:09:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:14:02.882 00:14:02.882 --- 10.0.0.1 ping statistics --- 00:14:02.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.882 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:14:02.882 21:09:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.882 21:09:40 -- nvmf/common.sh@410 -- # return 0 00:14:02.882 21:09:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:02.882 21:09:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.882 21:09:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:02.882 21:09:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:02.882 21:09:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.882 21:09:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:02.882 21:09:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:02.882 21:09:40 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:02.882 21:09:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:02.882 21:09:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:02.882 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:14:02.882 21:09:40 -- nvmf/common.sh@469 -- # nvmfpid=2291078 00:14:02.882 21:09:40 -- nvmf/common.sh@470 -- # waitforlisten 2291078 00:14:02.882 21:09:40 -- common/autotest_common.sh@819 -- # '[' -z 2291078 ']' 00:14:02.882 21:09:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:02.882 21:09:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.883 21:09:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:02.883 21:09:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.883 21:09:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:02.883 21:09:40 -- common/autotest_common.sh@10 -- # set +x 00:14:02.883 [2024-06-08 21:09:40.842936] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
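Before each target start the harness loads the initiator-side nvme-tcp module, and nvmftestfini later unloads it again (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines earlier in this log). A minimal sketch of that load/unload pattern, not the literal common.sh body: the {1..20} bound and the set +e / set -e guards come from the trace, while the retry sleep is an assumption.

  modprobe nvme-tcp                    # initiator kernel driver for NVMe/TCP
  # ... run the test ...
  set +e                               # unloading can race with in-flight disconnects
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1                          # retry interval (assumption)
  done
  modprobe -v -r nvme-fabrics
  set -e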
00:14:02.883 [2024-06-08 21:09:40.842999] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.883 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.883 [2024-06-08 21:09:40.930471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.144 [2024-06-08 21:09:41.020369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:03.144 [2024-06-08 21:09:41.020545] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.144 [2024-06-08 21:09:41.020556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.144 [2024-06-08 21:09:41.020566] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.144 [2024-06-08 21:09:41.020709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.144 [2024-06-08 21:09:41.020878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.144 [2024-06-08 21:09:41.020878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.716 21:09:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:03.716 21:09:41 -- common/autotest_common.sh@852 -- # return 0 00:14:03.716 21:09:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:03.716 21:09:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:03.716 21:09:41 -- common/autotest_common.sh@10 -- # set +x 00:14:03.716 21:09:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.716 21:09:41 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:03.716 21:09:41 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.716 [2024-06-08 21:09:41.802501] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.977 21:09:41 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.978 21:09:41 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.239 [2024-06-08 21:09:42.135921] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.239 21:09:42 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.239 21:09:42 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:04.500 Malloc0 00:14:04.500 21:09:42 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:04.761 Delay0 00:14:04.761 21:09:42 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.761 21:09:42 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:05.022 NULL1 00:14:05.022 21:09:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:05.282 21:09:43 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:05.282 21:09:43 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2291453 00:14:05.282 21:09:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:05.282 21:09:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.282 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.224 Read completed with error (sct=0, sc=11) 00:14:06.224 21:09:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.485 21:09:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:06.485 21:09:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:06.746 true 00:14:06.746 21:09:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:06.746 21:09:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.687 21:09:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.687 21:09:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:07.687 21:09:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:07.948 true 00:14:07.948 21:09:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:07.948 21:09:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.948 21:09:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.217 21:09:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:08.217 21:09:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:08.217 true 00:14:08.217 21:09:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:08.217 21:09:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.484 21:09:46 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.744 21:09:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:08.744 21:09:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:08.744 true 00:14:08.744 21:09:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:08.744 21:09:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.004 21:09:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.004 21:09:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:09.004 21:09:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:09.267 true 00:14:09.267 21:09:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:09.267 21:09:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.567 21:09:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.567 21:09:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:09.567 21:09:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:09.828 true 00:14:09.828 21:09:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:09.828 21:09:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.828 21:09:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.089 21:09:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:10.089 21:09:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:10.089 true 00:14:10.350 21:09:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:10.350 21:09:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.350 21:09:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.611 21:09:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:10.611 21:09:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:10.611 true 00:14:10.611 21:09:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:10.611 21:09:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.554 21:09:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:11.814 21:09:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:11.814 21:09:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:12.076 true 00:14:12.076 21:09:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:12.076 21:09:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.076 21:09:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.337 21:09:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:12.337 21:09:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:12.598 true 00:14:12.598 21:09:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:12.598 21:09:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.598 21:09:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.859 21:09:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:12.859 21:09:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:12.859 true 00:14:13.120 21:09:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:13.120 21:09:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.120 21:09:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.381 21:09:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:13.381 21:09:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:13.381 true 00:14:13.381 21:09:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:13.381 21:09:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.641 21:09:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.902 21:09:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:13.903 21:09:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:13.903 true 00:14:13.903 21:09:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:13.903 21:09:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.164 21:09:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.164 21:09:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:14.164 21:09:52 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:14.424 true 00:14:14.424 21:09:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:14.424 21:09:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.685 21:09:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.685 21:09:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:14.685 21:09:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:14.947 true 00:14:14.947 21:09:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:14.947 21:09:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.889 21:09:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.889 21:09:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:15.889 21:09:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:16.150 true 00:14:16.150 21:09:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:16.150 21:09:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.411 21:09:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.411 21:09:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:16.411 21:09:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:16.671 true 00:14:16.671 21:09:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:16.671 21:09:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.671 21:09:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.932 21:09:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:16.932 21:09:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:17.193 true 00:14:17.193 21:09:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:17.193 21:09:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.193 21:09:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.455 21:09:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:17.455 21:09:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1019 00:14:17.455 true 00:14:17.455 21:09:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:17.455 21:09:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.715 21:09:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.975 21:09:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:17.975 21:09:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:17.975 true 00:14:17.975 21:09:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:17.975 21:09:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:18.918 21:09:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.179 21:09:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:19.179 21:09:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:19.440 true 00:14:19.440 21:09:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:19.440 21:09:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.440 21:09:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.700 21:09:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:19.700 21:09:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:19.960 true 00:14:19.960 21:09:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:19.960 21:09:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.960 21:09:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.220 21:09:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:20.220 21:09:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:20.221 true 00:14:20.221 21:09:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:20.221 21:09:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.481 21:09:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.742 21:09:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:20.742 21:09:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:20.742 true 
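The block that keeps repeating from null_size=1001 onward is the main hot-plug loop of ns_hotplug_stress.sh (trace lines @44-@50): while the 30-second spdk_nvme_perf run started above (PERF_PID=2291453, randread against 10.0.0.2:4420) is still alive, namespace 1 (Delay0) is hot-removed and re-added and the NULL1 bdev backing namespace 2 is resized upward by one each pass; the intermittent 'Read completed with error' messages are the perf I/O stream hitting a namespace while it is unplugged. Reconstructed from the trace rather than quoted from the script, one iteration looks roughly like this (rpc.py path shortened):

  rpc=scripts/rpc.py                   # the trace uses the full Jenkins workspace path
  null_size=1000

  while kill -0 "$PERF_PID"; do        # keep going until spdk_nvme_perf (-t 30) exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove nsid 1 (Delay0)
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # and plug it straight back in
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                         # grow the bdev behind nsid 2
  done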
00:14:20.742 21:09:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:20.742 21:09:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.006 21:09:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.267 21:09:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:21.267 21:09:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:21.267 true 00:14:21.267 21:09:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:21.267 21:09:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.527 21:09:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.527 21:09:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:21.527 21:09:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:21.788 true 00:14:21.788 21:09:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:21.788 21:09:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.048 21:09:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.048 21:10:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:22.048 21:10:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:22.309 true 00:14:22.309 21:10:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:22.309 21:10:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.570 21:10:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.570 21:10:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:22.570 21:10:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:22.830 true 00:14:22.830 21:10:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:22.830 21:10:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.830 21:10:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.089 21:10:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:23.089 21:10:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:23.349 true 00:14:23.349 21:10:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:23.349 21:10:01 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.349 21:10:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.610 21:10:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:23.610 21:10:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:23.610 true 00:14:23.871 21:10:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:23.871 21:10:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.871 21:10:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.132 21:10:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:24.132 21:10:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:24.132 true 00:14:24.132 21:10:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:24.132 21:10:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.518 21:10:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.518 21:10:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:25.518 21:10:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:25.518 true 00:14:25.518 21:10:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:25.518 21:10:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.778 21:10:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.778 21:10:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:25.778 21:10:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:26.039 true 00:14:26.039 21:10:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:26.039 21:10:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.303 21:10:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.303 21:10:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:26.303 21:10:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:26.600 true 00:14:26.600 21:10:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:26.600 21:10:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.600 21:10:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.861 21:10:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:26.861 21:10:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:26.861 true 00:14:27.121 21:10:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:27.121 21:10:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.121 21:10:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.382 21:10:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:27.382 21:10:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:27.382 true 00:14:27.382 21:10:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:27.382 21:10:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.325 21:10:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.585 21:10:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:28.585 21:10:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:28.845 true 00:14:28.845 21:10:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:28.845 21:10:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.845 21:10:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.106 21:10:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:29.106 21:10:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:29.106 true 00:14:29.367 21:10:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:29.367 21:10:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.367 21:10:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.628 21:10:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:29.628 21:10:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:29.628 true 00:14:29.628 21:10:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:29.628 21:10:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:29.889 21:10:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.150 21:10:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:30.150 21:10:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:30.150 true 00:14:30.150 21:10:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:30.150 21:10:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.411 21:10:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.671 21:10:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:30.671 21:10:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:30.671 true 00:14:30.671 21:10:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:30.671 21:10:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.931 21:10:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.193 21:10:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:31.193 21:10:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:31.193 true 00:14:31.193 21:10:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:31.193 21:10:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.454 21:10:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.454 21:10:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:31.454 21:10:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:31.715 true 00:14:31.715 21:10:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:31.715 21:10:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.977 21:10:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.977 21:10:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:31.977 21:10:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:32.238 true 00:14:32.238 21:10:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:32.238 21:10:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.499 21:10:10 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.499 21:10:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:32.499 21:10:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:32.760 true 00:14:32.760 21:10:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:32.760 21:10:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.021 21:10:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.021 21:10:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:33.021 21:10:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:33.282 true 00:14:33.282 21:10:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:33.282 21:10:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.282 21:10:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.543 21:10:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:33.543 21:10:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:33.803 true 00:14:33.803 21:10:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:33.803 21:10:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.803 21:10:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.064 21:10:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:34.064 21:10:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:34.325 true 00:14:34.326 21:10:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:34.326 21:10:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.326 21:10:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.587 21:10:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:34.587 21:10:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:34.587 true 00:14:34.587 21:10:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:34.587 21:10:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.529 Initializing NVMe Controllers 00:14:35.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.529 Controller IO queue size 
128, less than required. 00:14:35.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.529 Controller IO queue size 128, less than required. 00:14:35.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:35.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:35.529 Initialization complete. Launching workers. 00:14:35.529 ======================================================== 00:14:35.529 Latency(us) 00:14:35.529 Device Information : IOPS MiB/s Average min max 00:14:35.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 275.36 0.13 134202.89 2861.49 1135027.22 00:14:35.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9693.41 4.73 13204.90 1621.59 398500.67 00:14:35.529 ======================================================== 00:14:35.529 Total : 9968.77 4.87 16547.18 1621.59 1135027.22 00:14:35.529 00:14:35.529 21:10:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.790 21:10:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:35.790 21:10:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:35.790 true 00:14:35.790 21:10:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2291453 00:14:35.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2291453) - No such process 00:14:35.790 21:10:13 -- target/ns_hotplug_stress.sh@53 -- # wait 2291453 00:14:35.790 21:10:13 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.050 21:10:14 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:36.312 null0 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.312 21:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:36.573 null1 00:14:36.573 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.573 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.573 21:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:36.573 null2 00:14:36.573 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.573 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.573 21:10:14 -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:36.833 null3 00:14:36.833 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.833 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.833 21:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:36.833 null4 00:14:36.833 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.833 21:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.833 21:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:37.094 null5 00:14:37.094 21:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.094 21:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.094 21:10:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:37.356 null6 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:37.356 null7 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
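With the single-namespace loop finished, the script has switched to the parallel phase: it has just created one null bdev per worker, null0 through null7, each with the same size/block-size arguments (100 4096) seen in the bdev_null_create calls above, and is now launching the workers in the background; the worker body itself is sketched a little further down, after the wait on the worker PIDs. The creation step, reconstructed from the @58-@60 trace lines (rpc.py path shortened):

  rpc=scripts/rpc.py
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096    # one backing bdev per hot-plug worker
  done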
00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@66 -- # wait 2298034 2298035 2298037 2298039 2298041 2298043 2298046 2298049 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.356 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.618 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
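From here on, the interleaved @16/@17/@18 trace lines come from eight backgrounded copies of the script's add_remove helper, one per namespace ID (the 'local nsid=N bdev=nullM' lines at @14 show the pairing, nsid 1..8 against null0..null7), all hammering the same subsystem at once; the wait on their eight PIDs appears just above. As reconstructed from the trace, not a verbatim copy of ns_hotplug_stress.sh, the helper and its launch look approximately like this:

  rpc=scripts/rpc.py                                   # shortened path, as in the earlier sketches

  add_remove() {                                       # body per trace lines @14-@18
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  pids=()
  for ((i = 0; i < nthreads; i++)); do                 # nthreads=8, see above
      add_remove $((i + 1)) "null$i" &                 # nsid i+1 paired with bdev null$i
      pids+=($!)
  done
  wait "${pids[@]}"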
00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:37.879 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.140 21:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:38.140 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.402 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:38.663 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.923 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.923 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.923 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.924 21:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:38.924 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.184 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:39.185 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:39.446 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.707 
21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:39.707 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:39.708 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.708 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:17 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:39.969 21:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.969 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:40.231 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:40.491 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.492 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.752 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:41.012 21:10:18 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:41.012 21:10:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.012 21:10:18 -- nvmf/common.sh@116 -- # sync 00:14:41.012 21:10:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.012 21:10:18 -- nvmf/common.sh@119 -- # set +e 00:14:41.012 21:10:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.012 21:10:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.012 rmmod nvme_tcp 00:14:41.012 rmmod nvme_fabrics 00:14:41.012 rmmod nvme_keyring 00:14:41.012 21:10:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.012 21:10:19 -- nvmf/common.sh@123 -- # set -e 00:14:41.012 21:10:19 -- nvmf/common.sh@124 -- # return 0 00:14:41.012 21:10:19 -- nvmf/common.sh@477 -- # '[' -n 2291078 ']' 00:14:41.012 21:10:19 -- nvmf/common.sh@478 -- # killprocess 2291078 00:14:41.012 21:10:19 -- common/autotest_common.sh@926 -- # '[' -z 2291078 ']' 00:14:41.012 21:10:19 -- common/autotest_common.sh@930 -- # kill -0 2291078 00:14:41.012 21:10:19 -- common/autotest_common.sh@931 -- # uname 00:14:41.012 21:10:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.012 21:10:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2291078 00:14:41.012 21:10:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.012 21:10:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.012 21:10:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2291078' 00:14:41.012 killing process with pid 2291078 00:14:41.012 21:10:19 -- common/autotest_common.sh@945 -- # kill 2291078 00:14:41.012 21:10:19 -- common/autotest_common.sh@950 -- # wait 2291078 00:14:41.273 21:10:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:41.273 21:10:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:41.273 21:10:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:41.273 21:10:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
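The namespace add/remove churn traced above is produced by the hotplug loop in target/ns_hotplug_stress.sh (script lines 16-18 in this xtrace): each namespace id is repeatedly attached to and detached from nqn.2016-06.io.spdk:cnode1, and the interleaved timestamps show several of these cycles running at once. A minimal sketch of that pattern, reconstructed from the log rather than quoted from the script -- the hotplug_loop helper name and the one-background-job-per-namespace structure are inferences from the interleaved output, not the script's actual layout:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    hotplug_loop() {                      # assumed helper; the trace shows this body at script lines 16-18
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; ++i )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # attach null bdev as namespace $nsid
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # and immediately detach it again
        done
    }

    for n in $(seq 1 8); do               # the trace shows nsids 1-8 backed by null0..null7
        hotplug_loop "$n" "null$((n - 1))" &
    done
    wait

Once the loops drain, the trap is cleared and nvmftestfini tears the target down (rmmod nvme-tcp/nvme-fabrics/nvme-keyring, killprocess of the nvmf_tgt pid), as the surrounding entries show.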
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.273 21:10:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:41.273 21:10:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.273 21:10:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.273 21:10:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.190 21:10:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:43.190 00:14:43.190 real 0m47.730s 00:14:43.190 user 3m6.297s 00:14:43.190 sys 0m16.548s 00:14:43.190 21:10:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.190 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.190 ************************************ 00:14:43.190 END TEST nvmf_ns_hotplug_stress 00:14:43.190 ************************************ 00:14:43.470 21:10:21 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:43.470 21:10:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:43.470 21:10:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:43.470 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.470 ************************************ 00:14:43.470 START TEST nvmf_connect_stress 00:14:43.470 ************************************ 00:14:43.470 21:10:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:43.470 * Looking for test storage... 00:14:43.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.470 21:10:21 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.470 21:10:21 -- nvmf/common.sh@7 -- # uname -s 00:14:43.470 21:10:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.470 21:10:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.470 21:10:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.470 21:10:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.470 21:10:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.470 21:10:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.470 21:10:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.470 21:10:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.470 21:10:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.470 21:10:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.470 21:10:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.470 21:10:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.470 21:10:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.470 21:10:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.470 21:10:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.470 21:10:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.470 21:10:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.470 21:10:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.470 21:10:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.470 21:10:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.470 21:10:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.470 21:10:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.470 21:10:21 -- paths/export.sh@5 -- # export PATH 00:14:43.470 21:10:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.470 21:10:21 -- nvmf/common.sh@46 -- # : 0 00:14:43.470 21:10:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:43.470 21:10:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:43.470 21:10:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:43.470 21:10:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.470 21:10:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.470 21:10:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:43.470 21:10:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:43.470 21:10:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:43.470 21:10:21 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:43.470 21:10:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:43.470 21:10:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.470 21:10:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:43.470 21:10:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:43.470 21:10:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:43.470 21:10:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.470 21:10:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.470 21:10:21 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.470 21:10:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:43.470 21:10:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:43.470 21:10:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:43.470 21:10:21 -- common/autotest_common.sh@10 -- # set +x 00:14:50.095 21:10:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:50.095 21:10:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:50.095 21:10:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:50.095 21:10:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:50.095 21:10:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:50.095 21:10:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:50.095 21:10:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:50.095 21:10:28 -- nvmf/common.sh@294 -- # net_devs=() 00:14:50.095 21:10:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:50.095 21:10:28 -- nvmf/common.sh@295 -- # e810=() 00:14:50.095 21:10:28 -- nvmf/common.sh@295 -- # local -ga e810 00:14:50.095 21:10:28 -- nvmf/common.sh@296 -- # x722=() 00:14:50.095 21:10:28 -- nvmf/common.sh@296 -- # local -ga x722 00:14:50.095 21:10:28 -- nvmf/common.sh@297 -- # mlx=() 00:14:50.095 21:10:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:50.095 21:10:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.095 21:10:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:50.095 21:10:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:50.095 21:10:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:50.095 21:10:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:50.095 21:10:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:50.095 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:50.095 21:10:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:50.095 21:10:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:50.095 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:50.095 
21:10:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:50.095 21:10:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:50.096 21:10:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:50.096 21:10:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:50.096 21:10:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:50.096 21:10:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.096 21:10:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:50.096 21:10:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.096 21:10:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:50.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:50.096 21:10:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.096 21:10:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:50.096 21:10:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.096 21:10:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:50.096 21:10:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.096 21:10:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:50.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:50.096 21:10:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.096 21:10:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:50.096 21:10:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:50.096 21:10:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:50.096 21:10:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:50.096 21:10:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:50.096 21:10:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.096 21:10:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.096 21:10:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.096 21:10:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:50.096 21:10:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.096 21:10:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.096 21:10:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:50.096 21:10:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.096 21:10:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.096 21:10:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:50.357 21:10:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:50.357 21:10:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.357 21:10:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.357 21:10:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.357 21:10:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.357 21:10:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:50.357 21:10:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.618 21:10:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.618 21:10:28 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.618 21:10:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:50.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:14:50.619 00:14:50.619 --- 10.0.0.2 ping statistics --- 00:14:50.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.619 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:14:50.619 21:10:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:14:50.619 00:14:50.619 --- 10.0.0.1 ping statistics --- 00:14:50.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.619 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:50.619 21:10:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.619 21:10:28 -- nvmf/common.sh@410 -- # return 0 00:14:50.619 21:10:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:50.619 21:10:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.619 21:10:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:50.619 21:10:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:50.619 21:10:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.619 21:10:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:50.619 21:10:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:50.619 21:10:28 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:50.619 21:10:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:50.619 21:10:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:50.619 21:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:50.619 21:10:28 -- nvmf/common.sh@469 -- # nvmfpid=2303214 00:14:50.619 21:10:28 -- nvmf/common.sh@470 -- # waitforlisten 2303214 00:14:50.619 21:10:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:50.619 21:10:28 -- common/autotest_common.sh@819 -- # '[' -z 2303214 ']' 00:14:50.619 21:10:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.619 21:10:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:50.619 21:10:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.619 21:10:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:50.619 21:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:50.619 [2024-06-08 21:10:28.602370] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
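Stripped of the xtrace prefixes, the network plumbing that nvmftestinit performed in the entries above reduces to the following sequence (commands copied from the trace; cvl_0_0 and cvl_0_1 are the two E810 ports found at 0000:4b:00.0/.1 during the PCI scan, and cvl_0_0_ns_spdk is the network namespace the target runs in):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 2303214 in this run), so the target listens on 10.0.0.2 while the initiator-side tools stay on 10.0.0.1.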
00:14:50.619 [2024-06-08 21:10:28.602433] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.619 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.619 [2024-06-08 21:10:28.687489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.880 [2024-06-08 21:10:28.777962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:50.880 [2024-06-08 21:10:28.778134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.881 [2024-06-08 21:10:28.778145] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.881 [2024-06-08 21:10:28.778152] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.881 [2024-06-08 21:10:28.778289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.881 [2024-06-08 21:10:28.778462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.881 [2024-06-08 21:10:28.778508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.453 21:10:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:51.453 21:10:29 -- common/autotest_common.sh@852 -- # return 0 00:14:51.453 21:10:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:51.453 21:10:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:51.453 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.453 21:10:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.453 21:10:29 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.453 21:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.453 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.453 [2024-06-08 21:10:29.423981] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.453 21:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.453 21:10:29 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:51.453 21:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.453 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.453 21:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.453 21:10:29 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.453 21:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.453 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.453 [2024-06-08 21:10:29.459559] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.453 21:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.453 21:10:29 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:51.453 21:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.453 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.453 NULL1 00:14:51.453 21:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.453 21:10:29 -- target/connect_stress.sh@21 -- # PERF_PID=2303331 00:14:51.453 21:10:29 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:51.453 21:10:29 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:51.453 21:10:29 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.453 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.453 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:51.715 21:10:29 -- target/connect_stress.sh@28 -- # cat 00:14:51.715 21:10:29 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:51.715 21:10:29 -- target/connect_stress.sh@35 -- # 
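The RPC sequence connect_stress.sh issues before starting the stressor, condensed from the entries above (rpc_cmd is the shell helper the tests use to talk to the target's RPC server; every value below is taken from the log, and only the backgrounding via $! is an assumption about how PERF_PID gets set):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8192-byte in-capsule data
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                             # null bdev: name, size in MB, block size
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                                                         # 10-second connect/disconnect stressor
    PERF_PID=$!                                                         # the trace records PERF_PID=2303331

After that, the seq 1 20 / cat loop at script lines 27-28 assembles the rpc.txt batch that the monitoring loop replays while the stressor runs.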
rpc_cmd 00:14:51.715 21:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.715 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.976 21:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.976 21:10:29 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:51.976 21:10:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.976 21:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.976 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:52.237 21:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.237 21:10:30 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:52.237 21:10:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.237 21:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.237 21:10:30 -- common/autotest_common.sh@10 -- # set +x 00:14:52.498 21:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.498 21:10:30 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:52.498 21:10:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.498 21:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.498 21:10:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.069 21:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.070 21:10:30 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:53.070 21:10:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.070 21:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.070 21:10:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.330 21:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.331 21:10:31 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:53.331 21:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.331 21:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.331 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:14:53.592 21:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.592 21:10:31 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:53.592 21:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.592 21:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.592 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:14:53.853 21:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.853 21:10:31 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:53.853 21:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.853 21:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.853 21:10:31 -- common/autotest_common.sh@10 -- # set +x 00:14:54.114 21:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.114 21:10:32 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:54.114 21:10:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.114 21:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.114 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:14:54.687 21:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.687 21:10:32 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:54.687 21:10:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.687 21:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.687 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:14:54.948 21:10:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.948 21:10:32 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:54.948 21:10:32 -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:54.948 21:10:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.948 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:14:55.209 21:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.209 21:10:33 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:55.209 21:10:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.209 21:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.209 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:14:55.469 21:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.469 21:10:33 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:55.470 21:10:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.470 21:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.470 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:14:55.731 21:10:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.731 21:10:33 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:55.731 21:10:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.731 21:10:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.731 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.302 21:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.302 21:10:34 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:56.302 21:10:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.302 21:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.302 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:56.563 21:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.563 21:10:34 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:56.563 21:10:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.563 21:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.563 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:56.825 21:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.825 21:10:34 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:56.825 21:10:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.825 21:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.825 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:57.086 21:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.086 21:10:35 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:57.086 21:10:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.086 21:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.086 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.347 21:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.347 21:10:35 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:57.347 21:10:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.347 21:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.347 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 21:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.918 21:10:35 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:57.918 21:10:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.918 21:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.918 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:14:58.178 21:10:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.178 21:10:36 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:58.178 21:10:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.178 
21:10:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.178 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:14:58.438 21:10:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.438 21:10:36 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:58.438 21:10:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.438 21:10:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.438 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:14:58.698 21:10:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.698 21:10:36 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:58.698 21:10:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.698 21:10:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.698 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:14:59.271 21:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.271 21:10:37 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:59.271 21:10:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.271 21:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.271 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:59.532 21:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.532 21:10:37 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:59.532 21:10:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.532 21:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.532 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:59.793 21:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.793 21:10:37 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:14:59.793 21:10:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.793 21:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.793 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:00.054 21:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.054 21:10:38 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:15:00.054 21:10:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.054 21:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.054 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:15:00.315 21:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.316 21:10:38 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:15:00.316 21:10:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.316 21:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.316 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:15:00.886 21:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.886 21:10:38 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:15:00.886 21:10:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.886 21:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.886 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:15:01.146 21:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.146 21:10:39 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:15:01.146 21:10:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.146 21:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.146 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:15:01.407 21:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.407 21:10:39 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:15:01.407 21:10:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.407 21:10:39 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.407 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:15:01.668 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.668 21:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.668 21:10:39 -- target/connect_stress.sh@34 -- # kill -0 2303331 00:15:01.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2303331) - No such process 00:15:01.668 21:10:39 -- target/connect_stress.sh@38 -- # wait 2303331 00:15:01.668 21:10:39 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:01.668 21:10:39 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:01.668 21:10:39 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:01.668 21:10:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.668 21:10:39 -- nvmf/common.sh@116 -- # sync 00:15:01.668 21:10:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.668 21:10:39 -- nvmf/common.sh@119 -- # set +e 00:15:01.668 21:10:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.668 21:10:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.668 rmmod nvme_tcp 00:15:01.668 rmmod nvme_fabrics 00:15:01.668 rmmod nvme_keyring 00:15:01.668 21:10:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.668 21:10:39 -- nvmf/common.sh@123 -- # set -e 00:15:01.668 21:10:39 -- nvmf/common.sh@124 -- # return 0 00:15:01.668 21:10:39 -- nvmf/common.sh@477 -- # '[' -n 2303214 ']' 00:15:01.668 21:10:39 -- nvmf/common.sh@478 -- # killprocess 2303214 00:15:01.668 21:10:39 -- common/autotest_common.sh@926 -- # '[' -z 2303214 ']' 00:15:01.668 21:10:39 -- common/autotest_common.sh@930 -- # kill -0 2303214 00:15:01.668 21:10:39 -- common/autotest_common.sh@931 -- # uname 00:15:01.929 21:10:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:01.929 21:10:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2303214 00:15:01.929 21:10:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:01.929 21:10:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:01.929 21:10:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2303214' 00:15:01.929 killing process with pid 2303214 00:15:01.929 21:10:39 -- common/autotest_common.sh@945 -- # kill 2303214 00:15:01.929 21:10:39 -- common/autotest_common.sh@950 -- # wait 2303214 00:15:01.929 21:10:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:01.929 21:10:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:01.929 21:10:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:01.929 21:10:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.929 21:10:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:01.929 21:10:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.929 21:10:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.929 21:10:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.472 21:10:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:04.472 00:15:04.472 real 0m20.690s 00:15:04.472 user 0m41.809s 00:15:04.472 sys 0m8.579s 00:15:04.472 21:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.472 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:15:04.472 ************************************ 00:15:04.472 END TEST nvmf_connect_stress 00:15:04.472 
************************************ 00:15:04.472 21:10:42 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:04.473 21:10:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:04.473 21:10:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.473 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:04.473 ************************************ 00:15:04.473 START TEST nvmf_fused_ordering 00:15:04.473 ************************************ 00:15:04.473 21:10:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:04.473 * Looking for test storage... 00:15:04.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.473 21:10:42 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.473 21:10:42 -- nvmf/common.sh@7 -- # uname -s 00:15:04.473 21:10:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.473 21:10:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.473 21:10:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.473 21:10:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.473 21:10:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.473 21:10:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.473 21:10:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.473 21:10:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.473 21:10:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.473 21:10:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.473 21:10:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:04.473 21:10:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:04.473 21:10:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.473 21:10:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.473 21:10:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.473 21:10:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.473 21:10:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.473 21:10:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.473 21:10:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.473 21:10:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.473 21:10:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.473 21:10:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.473 21:10:42 -- paths/export.sh@5 -- # export PATH 00:15:04.473 21:10:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.473 21:10:42 -- nvmf/common.sh@46 -- # : 0 00:15:04.473 21:10:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.473 21:10:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.473 21:10:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.473 21:10:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.473 21:10:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.473 21:10:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.473 21:10:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.473 21:10:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.473 21:10:42 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:04.473 21:10:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:04.473 21:10:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.473 21:10:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.473 21:10:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.473 21:10:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:04.473 21:10:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.473 21:10:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.473 21:10:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.473 21:10:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.473 21:10:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.473 21:10:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.473 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:11.099 21:10:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.099 21:10:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:11.099 21:10:48 -- nvmf/common.sh@290 -- # local -a pci_devs 
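At this point fused_ordering.sh has sourced test/nvmf/common.sh (which fixes NVMF_PORT=4420, generates NVME_HOSTNQN via nvme gen-hostnqn, and provides the helpers used in the rest of this trace) and has entered nvmftestinit. Stripped of the PATH noise, the shape of such a target test is roughly the following simplified sketch inferred from this trace; $rootdir stands for the spdk checkout and the comments paraphrase what the helpers do in this run:

    source "$rootdir"/test/nvmf/common.sh

    nvmftestinit                 # pick the E810 ports, build the cvl_0_0_ns_spdk namespace, load nvme-tcp
    nvmfappstart -m 0x2          # start nvmf_tgt inside that namespace, wait for /var/tmp/spdk.sock

    # ... rpc_cmd transport/subsystem/listener/bdev setup, then the fused_ordering client ...

    trap - SIGINT SIGTERM EXIT
    nvmftestfini                 # unload the nvme modules, kill nvmf_tgt, flush the interfaces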
00:15:11.099 21:10:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:11.099 21:10:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:11.099 21:10:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:11.099 21:10:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:11.099 21:10:48 -- nvmf/common.sh@294 -- # net_devs=() 00:15:11.099 21:10:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:11.099 21:10:48 -- nvmf/common.sh@295 -- # e810=() 00:15:11.099 21:10:48 -- nvmf/common.sh@295 -- # local -ga e810 00:15:11.099 21:10:48 -- nvmf/common.sh@296 -- # x722=() 00:15:11.099 21:10:48 -- nvmf/common.sh@296 -- # local -ga x722 00:15:11.099 21:10:48 -- nvmf/common.sh@297 -- # mlx=() 00:15:11.099 21:10:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:11.099 21:10:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.099 21:10:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:11.099 21:10:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:11.099 21:10:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:11.099 21:10:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.099 21:10:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:11.099 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:11.099 21:10:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.099 21:10:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:11.099 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:11.099 21:10:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:11.099 21:10:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
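The scan above is nvmf/common.sh grouping supported NICs by PCI vendor:device ID; because this run is configured for E810, pci_devs is narrowed to the 0x8086:0x159b entries (the two ports at 0000:4b:00.0/.1 bound to ice), and each PCI address is then resolved to its kernel netdev through sysfs, as the "Found net devices under" lines that follow show. The same lookup can be reproduced by hand, for example (a sketch, not part of the test; lspci's -D and -d vendor:device filters are standard options, and the address is simply the one found on this host):

    # List E810 ports (vendor 0x8086, device 0x159b), with full PCI domains
    lspci -D -d 8086:159b
    # Resolve a PCI address to its net device the same way common.sh does
    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net/"      # -> cvl_0_0 on this host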
00:15:11.099 21:10:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.099 21:10:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.099 21:10:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.099 21:10:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.099 21:10:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:11.099 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:11.099 21:10:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.099 21:10:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.099 21:10:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.099 21:10:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.099 21:10:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.099 21:10:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:11.099 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:11.099 21:10:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.099 21:10:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:11.099 21:10:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:11.099 21:10:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:11.099 21:10:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:11.099 21:10:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.099 21:10:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.099 21:10:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.099 21:10:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:11.099 21:10:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.099 21:10:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.099 21:10:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:11.099 21:10:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.099 21:10:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.099 21:10:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:11.099 21:10:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:11.099 21:10:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.099 21:10:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.099 21:10:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.099 21:10:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.099 21:10:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:11.099 21:10:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.361 21:10:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.361 21:10:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.361 21:10:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:11.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
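nvmf_tcp_init above builds a small point-to-point topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is explicitly allowed through iptables, and a ping verifies the path. Condensed from the commands in the trace (the interface names are just the ones this host produced; all of this runs as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # sanity-check the link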
00:15:11.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:15:11.361 00:15:11.361 --- 10.0.0.2 ping statistics --- 00:15:11.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.361 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:15:11.361 21:10:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:15:11.361 00:15:11.361 --- 10.0.0.1 ping statistics --- 00:15:11.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.361 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:15:11.361 21:10:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.361 21:10:49 -- nvmf/common.sh@410 -- # return 0 00:15:11.361 21:10:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.361 21:10:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.361 21:10:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.361 21:10:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.361 21:10:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.361 21:10:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.361 21:10:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.361 21:10:49 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:11.361 21:10:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.361 21:10:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:11.361 21:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:11.361 21:10:49 -- nvmf/common.sh@469 -- # nvmfpid=2309633 00:15:11.361 21:10:49 -- nvmf/common.sh@470 -- # waitforlisten 2309633 00:15:11.361 21:10:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.361 21:10:49 -- common/autotest_common.sh@819 -- # '[' -z 2309633 ']' 00:15:11.361 21:10:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.361 21:10:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.361 21:10:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.361 21:10:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.361 21:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:11.361 [2024-06-08 21:10:49.357231] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:11.361 [2024-06-08 21:10:49.357292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.361 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.361 [2024-06-08 21:10:49.441668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.623 [2024-06-08 21:10:49.534304] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.623 [2024-06-08 21:10:49.534458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
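nvmfappstart above launches the target inside that namespace and blocks until its JSON-RPC socket is up; every rpc_cmd that follows talks to this single nvmf_tgt instance (pid 2309633 here). In essence (a sketch; waitforlisten is the helper named in the trace, its polling of /var/tmp/spdk.sock is only summarized in the comment, and $rootdir again stands for the checkout path):

    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"     # poll until /var/tmp/spdk.sock accepts RPC connections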
00:15:11.623 [2024-06-08 21:10:49.534469] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.623 [2024-06-08 21:10:49.534477] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.623 [2024-06-08 21:10:49.534502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.197 21:10:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.197 21:10:50 -- common/autotest_common.sh@852 -- # return 0 00:15:12.197 21:10:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.197 21:10:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 21:10:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.197 21:10:50 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.197 21:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 [2024-06-08 21:10:50.189838] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.197 21:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.197 21:10:50 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:12.197 21:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 21:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.197 21:10:50 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.197 21:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 [2024-06-08 21:10:50.214066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.197 21:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.197 21:10:50 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:12.197 21:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 NULL1 00:15:12.197 21:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.197 21:10:50 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:12.197 21:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 21:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.197 21:10:50 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:12.197 21:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.197 21:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:12.197 21:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.197 21:10:50 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:12.197 [2024-06-08 21:10:50.282655] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
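The rpc_cmd calls above assemble the whole target that the fused_ordering client then exercises: a TCP transport, one subsystem (nqn.2016-06.io.spdk:cnode1) that any host may connect to (-a), a listener on 10.0.0.2:4420, and a 1000 MB null bdev attached as its first namespace, which is why the client reports "Namespace ID: 1 size: 1GB" below. rpc_cmd drives the same JSON-RPC methods that scripts/rpc.py exposes, so an equivalent manual sequence would be roughly the following sketch (the default /var/tmp/spdk.sock is assumed reachable from the root namespace because it is a filesystem path, not a network endpoint):

    rpc="$rootdir"/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512          # 1000 MB backing size, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # then, from the root namespace over cvl_0_1:
    "$rootdir"/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'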
00:15:12.197 [2024-06-08 21:10:50.282721] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309836 ] 00:15:12.458 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.722 Attached to nqn.2016-06.io.spdk:cnode1 00:15:12.722 Namespace ID: 1 size: 1GB 00:15:12.722 fused_ordering(0) 00:15:12.722 fused_ordering(1) 00:15:12.722 fused_ordering(2) 00:15:12.722 fused_ordering(3) 00:15:12.722 fused_ordering(4) 00:15:12.722 fused_ordering(5) 00:15:12.722 fused_ordering(6) 00:15:12.722 fused_ordering(7) 00:15:12.722 fused_ordering(8) 00:15:12.722 fused_ordering(9) 00:15:12.722 fused_ordering(10) 00:15:12.722 fused_ordering(11) 00:15:12.722 fused_ordering(12) 00:15:12.722 fused_ordering(13) 00:15:12.722 fused_ordering(14) 00:15:12.722 fused_ordering(15) 00:15:12.722 fused_ordering(16) 00:15:12.722 fused_ordering(17) 00:15:12.722 fused_ordering(18) 00:15:12.722 fused_ordering(19) 00:15:12.722 fused_ordering(20) 00:15:12.722 fused_ordering(21) 00:15:12.722 fused_ordering(22) 00:15:12.722 fused_ordering(23) 00:15:12.722 fused_ordering(24) 00:15:12.722 fused_ordering(25) 00:15:12.722 fused_ordering(26) 00:15:12.722 fused_ordering(27) 00:15:12.722 fused_ordering(28) 00:15:12.722 fused_ordering(29) 00:15:12.722 fused_ordering(30) 00:15:12.722 fused_ordering(31) 00:15:12.722 fused_ordering(32) 00:15:12.722 fused_ordering(33) 00:15:12.722 fused_ordering(34) 00:15:12.722 fused_ordering(35) 00:15:12.722 fused_ordering(36) 00:15:12.722 fused_ordering(37) 00:15:12.722 fused_ordering(38) 00:15:12.722 fused_ordering(39) 00:15:12.722 fused_ordering(40) 00:15:12.722 fused_ordering(41) 00:15:12.722 fused_ordering(42) 00:15:12.722 fused_ordering(43) 00:15:12.722 fused_ordering(44) 00:15:12.722 fused_ordering(45) 00:15:12.722 fused_ordering(46) 00:15:12.722 fused_ordering(47) 00:15:12.722 fused_ordering(48) 00:15:12.722 fused_ordering(49) 00:15:12.722 fused_ordering(50) 00:15:12.722 fused_ordering(51) 00:15:12.722 fused_ordering(52) 00:15:12.722 fused_ordering(53) 00:15:12.722 fused_ordering(54) 00:15:12.722 fused_ordering(55) 00:15:12.722 fused_ordering(56) 00:15:12.722 fused_ordering(57) 00:15:12.722 fused_ordering(58) 00:15:12.722 fused_ordering(59) 00:15:12.722 fused_ordering(60) 00:15:12.722 fused_ordering(61) 00:15:12.722 fused_ordering(62) 00:15:12.722 fused_ordering(63) 00:15:12.722 fused_ordering(64) 00:15:12.722 fused_ordering(65) 00:15:12.722 fused_ordering(66) 00:15:12.722 fused_ordering(67) 00:15:12.722 fused_ordering(68) 00:15:12.722 fused_ordering(69) 00:15:12.722 fused_ordering(70) 00:15:12.722 fused_ordering(71) 00:15:12.722 fused_ordering(72) 00:15:12.722 fused_ordering(73) 00:15:12.722 fused_ordering(74) 00:15:12.722 fused_ordering(75) 00:15:12.722 fused_ordering(76) 00:15:12.722 fused_ordering(77) 00:15:12.722 fused_ordering(78) 00:15:12.722 fused_ordering(79) 00:15:12.722 fused_ordering(80) 00:15:12.722 fused_ordering(81) 00:15:12.722 fused_ordering(82) 00:15:12.722 fused_ordering(83) 00:15:12.722 fused_ordering(84) 00:15:12.722 fused_ordering(85) 00:15:12.722 fused_ordering(86) 00:15:12.722 fused_ordering(87) 00:15:12.722 fused_ordering(88) 00:15:12.722 fused_ordering(89) 00:15:12.722 fused_ordering(90) 00:15:12.722 fused_ordering(91) 00:15:12.723 fused_ordering(92) 00:15:12.723 fused_ordering(93) 00:15:12.723 fused_ordering(94) 00:15:12.723 fused_ordering(95) 00:15:12.723 fused_ordering(96) 00:15:12.723 
fused_ordering(97) 00:15:12.723 fused_ordering(98) 00:15:12.723 fused_ordering(99) 00:15:12.723 fused_ordering(100) 00:15:12.723 fused_ordering(101) 00:15:12.723 fused_ordering(102) 00:15:12.723 fused_ordering(103) 00:15:12.723 fused_ordering(104) 00:15:12.723 fused_ordering(105) 00:15:12.723 fused_ordering(106) 00:15:12.723 fused_ordering(107) 00:15:12.723 fused_ordering(108) 00:15:12.723 fused_ordering(109) 00:15:12.723 fused_ordering(110) 00:15:12.723 fused_ordering(111) 00:15:12.723 fused_ordering(112) 00:15:12.723 fused_ordering(113) 00:15:12.723 fused_ordering(114) 00:15:12.723 fused_ordering(115) 00:15:12.723 fused_ordering(116) 00:15:12.723 fused_ordering(117) 00:15:12.723 fused_ordering(118) 00:15:12.723 fused_ordering(119) 00:15:12.723 fused_ordering(120) 00:15:12.723 fused_ordering(121) 00:15:12.723 fused_ordering(122) 00:15:12.723 fused_ordering(123) 00:15:12.723 fused_ordering(124) 00:15:12.723 fused_ordering(125) 00:15:12.723 fused_ordering(126) 00:15:12.723 fused_ordering(127) 00:15:12.723 fused_ordering(128) 00:15:12.723 fused_ordering(129) 00:15:12.723 fused_ordering(130) 00:15:12.723 fused_ordering(131) 00:15:12.723 fused_ordering(132) 00:15:12.723 fused_ordering(133) 00:15:12.723 fused_ordering(134) 00:15:12.723 fused_ordering(135) 00:15:12.723 fused_ordering(136) 00:15:12.723 fused_ordering(137) 00:15:12.723 fused_ordering(138) 00:15:12.723 fused_ordering(139) 00:15:12.723 fused_ordering(140) 00:15:12.723 fused_ordering(141) 00:15:12.723 fused_ordering(142) 00:15:12.723 fused_ordering(143) 00:15:12.723 fused_ordering(144) 00:15:12.723 fused_ordering(145) 00:15:12.723 fused_ordering(146) 00:15:12.723 fused_ordering(147) 00:15:12.723 fused_ordering(148) 00:15:12.723 fused_ordering(149) 00:15:12.723 fused_ordering(150) 00:15:12.723 fused_ordering(151) 00:15:12.723 fused_ordering(152) 00:15:12.723 fused_ordering(153) 00:15:12.723 fused_ordering(154) 00:15:12.723 fused_ordering(155) 00:15:12.723 fused_ordering(156) 00:15:12.723 fused_ordering(157) 00:15:12.723 fused_ordering(158) 00:15:12.723 fused_ordering(159) 00:15:12.723 fused_ordering(160) 00:15:12.723 fused_ordering(161) 00:15:12.723 fused_ordering(162) 00:15:12.723 fused_ordering(163) 00:15:12.723 fused_ordering(164) 00:15:12.723 fused_ordering(165) 00:15:12.723 fused_ordering(166) 00:15:12.723 fused_ordering(167) 00:15:12.723 fused_ordering(168) 00:15:12.723 fused_ordering(169) 00:15:12.723 fused_ordering(170) 00:15:12.723 fused_ordering(171) 00:15:12.723 fused_ordering(172) 00:15:12.723 fused_ordering(173) 00:15:12.723 fused_ordering(174) 00:15:12.723 fused_ordering(175) 00:15:12.723 fused_ordering(176) 00:15:12.723 fused_ordering(177) 00:15:12.723 fused_ordering(178) 00:15:12.723 fused_ordering(179) 00:15:12.723 fused_ordering(180) 00:15:12.723 fused_ordering(181) 00:15:12.723 fused_ordering(182) 00:15:12.723 fused_ordering(183) 00:15:12.723 fused_ordering(184) 00:15:12.723 fused_ordering(185) 00:15:12.723 fused_ordering(186) 00:15:12.723 fused_ordering(187) 00:15:12.723 fused_ordering(188) 00:15:12.723 fused_ordering(189) 00:15:12.723 fused_ordering(190) 00:15:12.723 fused_ordering(191) 00:15:12.723 fused_ordering(192) 00:15:12.723 fused_ordering(193) 00:15:12.723 fused_ordering(194) 00:15:12.723 fused_ordering(195) 00:15:12.723 fused_ordering(196) 00:15:12.723 fused_ordering(197) 00:15:12.723 fused_ordering(198) 00:15:12.723 fused_ordering(199) 00:15:12.723 fused_ordering(200) 00:15:12.723 fused_ordering(201) 00:15:12.723 fused_ordering(202) 00:15:12.723 fused_ordering(203) 00:15:12.723 fused_ordering(204) 
00:15:12.723 fused_ordering(205) 00:15:13.294 fused_ordering(206) 00:15:13.294 fused_ordering(207) 00:15:13.294 fused_ordering(208) 00:15:13.294 fused_ordering(209) 00:15:13.294 fused_ordering(210) 00:15:13.294 fused_ordering(211) 00:15:13.294 fused_ordering(212) 00:15:13.294 fused_ordering(213) 00:15:13.294 fused_ordering(214) 00:15:13.294 fused_ordering(215) 00:15:13.294 fused_ordering(216) 00:15:13.294 fused_ordering(217) 00:15:13.294 fused_ordering(218) 00:15:13.294 fused_ordering(219) 00:15:13.294 fused_ordering(220) 00:15:13.294 fused_ordering(221) 00:15:13.294 fused_ordering(222) 00:15:13.294 fused_ordering(223) 00:15:13.294 fused_ordering(224) 00:15:13.294 fused_ordering(225) 00:15:13.294 fused_ordering(226) 00:15:13.294 fused_ordering(227) 00:15:13.294 fused_ordering(228) 00:15:13.294 fused_ordering(229) 00:15:13.294 fused_ordering(230) 00:15:13.294 fused_ordering(231) 00:15:13.294 fused_ordering(232) 00:15:13.294 fused_ordering(233) 00:15:13.294 fused_ordering(234) 00:15:13.294 fused_ordering(235) 00:15:13.294 fused_ordering(236) 00:15:13.294 fused_ordering(237) 00:15:13.294 fused_ordering(238) 00:15:13.294 fused_ordering(239) 00:15:13.294 fused_ordering(240) 00:15:13.294 fused_ordering(241) 00:15:13.294 fused_ordering(242) 00:15:13.294 fused_ordering(243) 00:15:13.294 fused_ordering(244) 00:15:13.294 fused_ordering(245) 00:15:13.294 fused_ordering(246) 00:15:13.294 fused_ordering(247) 00:15:13.294 fused_ordering(248) 00:15:13.294 fused_ordering(249) 00:15:13.294 fused_ordering(250) 00:15:13.294 fused_ordering(251) 00:15:13.294 fused_ordering(252) 00:15:13.294 fused_ordering(253) 00:15:13.294 fused_ordering(254) 00:15:13.294 fused_ordering(255) 00:15:13.294 fused_ordering(256) 00:15:13.294 fused_ordering(257) 00:15:13.294 fused_ordering(258) 00:15:13.294 fused_ordering(259) 00:15:13.294 fused_ordering(260) 00:15:13.294 fused_ordering(261) 00:15:13.294 fused_ordering(262) 00:15:13.294 fused_ordering(263) 00:15:13.294 fused_ordering(264) 00:15:13.294 fused_ordering(265) 00:15:13.294 fused_ordering(266) 00:15:13.294 fused_ordering(267) 00:15:13.295 fused_ordering(268) 00:15:13.295 fused_ordering(269) 00:15:13.295 fused_ordering(270) 00:15:13.295 fused_ordering(271) 00:15:13.295 fused_ordering(272) 00:15:13.295 fused_ordering(273) 00:15:13.295 fused_ordering(274) 00:15:13.295 fused_ordering(275) 00:15:13.295 fused_ordering(276) 00:15:13.295 fused_ordering(277) 00:15:13.295 fused_ordering(278) 00:15:13.295 fused_ordering(279) 00:15:13.295 fused_ordering(280) 00:15:13.295 fused_ordering(281) 00:15:13.295 fused_ordering(282) 00:15:13.295 fused_ordering(283) 00:15:13.295 fused_ordering(284) 00:15:13.295 fused_ordering(285) 00:15:13.295 fused_ordering(286) 00:15:13.295 fused_ordering(287) 00:15:13.295 fused_ordering(288) 00:15:13.295 fused_ordering(289) 00:15:13.295 fused_ordering(290) 00:15:13.295 fused_ordering(291) 00:15:13.295 fused_ordering(292) 00:15:13.295 fused_ordering(293) 00:15:13.295 fused_ordering(294) 00:15:13.295 fused_ordering(295) 00:15:13.295 fused_ordering(296) 00:15:13.295 fused_ordering(297) 00:15:13.295 fused_ordering(298) 00:15:13.295 fused_ordering(299) 00:15:13.295 fused_ordering(300) 00:15:13.295 fused_ordering(301) 00:15:13.295 fused_ordering(302) 00:15:13.295 fused_ordering(303) 00:15:13.295 fused_ordering(304) 00:15:13.295 fused_ordering(305) 00:15:13.295 fused_ordering(306) 00:15:13.295 fused_ordering(307) 00:15:13.295 fused_ordering(308) 00:15:13.295 fused_ordering(309) 00:15:13.295 fused_ordering(310) 00:15:13.295 fused_ordering(311) 00:15:13.295 
fused_ordering(312) 00:15:13.295 fused_ordering(313) 00:15:13.295 fused_ordering(314) 00:15:13.295 fused_ordering(315) 00:15:13.295 fused_ordering(316) 00:15:13.295 fused_ordering(317) 00:15:13.295 fused_ordering(318) 00:15:13.295 fused_ordering(319) 00:15:13.295 fused_ordering(320) 00:15:13.295 fused_ordering(321) 00:15:13.295 fused_ordering(322) 00:15:13.295 fused_ordering(323) 00:15:13.295 fused_ordering(324) 00:15:13.295 fused_ordering(325) 00:15:13.295 fused_ordering(326) 00:15:13.295 fused_ordering(327) 00:15:13.295 fused_ordering(328) 00:15:13.295 fused_ordering(329) 00:15:13.295 fused_ordering(330) 00:15:13.295 fused_ordering(331) 00:15:13.295 fused_ordering(332) 00:15:13.295 fused_ordering(333) 00:15:13.295 fused_ordering(334) 00:15:13.295 fused_ordering(335) 00:15:13.295 fused_ordering(336) 00:15:13.295 fused_ordering(337) 00:15:13.295 fused_ordering(338) 00:15:13.295 fused_ordering(339) 00:15:13.295 fused_ordering(340) 00:15:13.295 fused_ordering(341) 00:15:13.295 fused_ordering(342) 00:15:13.295 fused_ordering(343) 00:15:13.295 fused_ordering(344) 00:15:13.295 fused_ordering(345) 00:15:13.295 fused_ordering(346) 00:15:13.295 fused_ordering(347) 00:15:13.295 fused_ordering(348) 00:15:13.295 fused_ordering(349) 00:15:13.295 fused_ordering(350) 00:15:13.295 fused_ordering(351) 00:15:13.295 fused_ordering(352) 00:15:13.295 fused_ordering(353) 00:15:13.295 fused_ordering(354) 00:15:13.295 fused_ordering(355) 00:15:13.295 fused_ordering(356) 00:15:13.295 fused_ordering(357) 00:15:13.295 fused_ordering(358) 00:15:13.295 fused_ordering(359) 00:15:13.295 fused_ordering(360) 00:15:13.295 fused_ordering(361) 00:15:13.295 fused_ordering(362) 00:15:13.295 fused_ordering(363) 00:15:13.295 fused_ordering(364) 00:15:13.295 fused_ordering(365) 00:15:13.295 fused_ordering(366) 00:15:13.295 fused_ordering(367) 00:15:13.295 fused_ordering(368) 00:15:13.295 fused_ordering(369) 00:15:13.295 fused_ordering(370) 00:15:13.295 fused_ordering(371) 00:15:13.295 fused_ordering(372) 00:15:13.295 fused_ordering(373) 00:15:13.295 fused_ordering(374) 00:15:13.295 fused_ordering(375) 00:15:13.295 fused_ordering(376) 00:15:13.295 fused_ordering(377) 00:15:13.295 fused_ordering(378) 00:15:13.295 fused_ordering(379) 00:15:13.295 fused_ordering(380) 00:15:13.295 fused_ordering(381) 00:15:13.295 fused_ordering(382) 00:15:13.295 fused_ordering(383) 00:15:13.295 fused_ordering(384) 00:15:13.295 fused_ordering(385) 00:15:13.295 fused_ordering(386) 00:15:13.295 fused_ordering(387) 00:15:13.295 fused_ordering(388) 00:15:13.295 fused_ordering(389) 00:15:13.295 fused_ordering(390) 00:15:13.295 fused_ordering(391) 00:15:13.295 fused_ordering(392) 00:15:13.295 fused_ordering(393) 00:15:13.295 fused_ordering(394) 00:15:13.295 fused_ordering(395) 00:15:13.295 fused_ordering(396) 00:15:13.295 fused_ordering(397) 00:15:13.295 fused_ordering(398) 00:15:13.295 fused_ordering(399) 00:15:13.295 fused_ordering(400) 00:15:13.295 fused_ordering(401) 00:15:13.295 fused_ordering(402) 00:15:13.295 fused_ordering(403) 00:15:13.295 fused_ordering(404) 00:15:13.295 fused_ordering(405) 00:15:13.295 fused_ordering(406) 00:15:13.295 fused_ordering(407) 00:15:13.295 fused_ordering(408) 00:15:13.295 fused_ordering(409) 00:15:13.295 fused_ordering(410) 00:15:13.867 fused_ordering(411) 00:15:13.867 fused_ordering(412) 00:15:13.867 fused_ordering(413) 00:15:13.867 fused_ordering(414) 00:15:13.867 fused_ordering(415) 00:15:13.867 fused_ordering(416) 00:15:13.867 fused_ordering(417) 00:15:13.867 fused_ordering(418) 00:15:13.867 fused_ordering(419) 
00:15:13.867 fused_ordering(420) 00:15:13.867 fused_ordering(421) 00:15:13.867 fused_ordering(422) 00:15:13.867 fused_ordering(423) 00:15:13.867 fused_ordering(424) 00:15:13.867 fused_ordering(425) 00:15:13.867 fused_ordering(426) 00:15:13.867 fused_ordering(427) 00:15:13.867 fused_ordering(428) 00:15:13.867 fused_ordering(429) 00:15:13.867 fused_ordering(430) 00:15:13.867 fused_ordering(431) 00:15:13.867 fused_ordering(432) 00:15:13.867 fused_ordering(433) 00:15:13.867 fused_ordering(434) 00:15:13.867 fused_ordering(435) 00:15:13.867 fused_ordering(436) 00:15:13.867 fused_ordering(437) 00:15:13.867 fused_ordering(438) 00:15:13.867 fused_ordering(439) 00:15:13.867 fused_ordering(440) 00:15:13.867 fused_ordering(441) 00:15:13.867 fused_ordering(442) 00:15:13.867 fused_ordering(443) 00:15:13.867 fused_ordering(444) 00:15:13.867 fused_ordering(445) 00:15:13.867 fused_ordering(446) 00:15:13.867 fused_ordering(447) 00:15:13.867 fused_ordering(448) 00:15:13.867 fused_ordering(449) 00:15:13.867 fused_ordering(450) 00:15:13.867 fused_ordering(451) 00:15:13.867 fused_ordering(452) 00:15:13.867 fused_ordering(453) 00:15:13.867 fused_ordering(454) 00:15:13.867 fused_ordering(455) 00:15:13.867 fused_ordering(456) 00:15:13.867 fused_ordering(457) 00:15:13.867 fused_ordering(458) 00:15:13.867 fused_ordering(459) 00:15:13.867 fused_ordering(460) 00:15:13.867 fused_ordering(461) 00:15:13.867 fused_ordering(462) 00:15:13.867 fused_ordering(463) 00:15:13.867 fused_ordering(464) 00:15:13.867 fused_ordering(465) 00:15:13.867 fused_ordering(466) 00:15:13.867 fused_ordering(467) 00:15:13.867 fused_ordering(468) 00:15:13.867 fused_ordering(469) 00:15:13.867 fused_ordering(470) 00:15:13.867 fused_ordering(471) 00:15:13.867 fused_ordering(472) 00:15:13.867 fused_ordering(473) 00:15:13.867 fused_ordering(474) 00:15:13.867 fused_ordering(475) 00:15:13.867 fused_ordering(476) 00:15:13.867 fused_ordering(477) 00:15:13.867 fused_ordering(478) 00:15:13.867 fused_ordering(479) 00:15:13.867 fused_ordering(480) 00:15:13.867 fused_ordering(481) 00:15:13.867 fused_ordering(482) 00:15:13.867 fused_ordering(483) 00:15:13.867 fused_ordering(484) 00:15:13.867 fused_ordering(485) 00:15:13.867 fused_ordering(486) 00:15:13.867 fused_ordering(487) 00:15:13.867 fused_ordering(488) 00:15:13.867 fused_ordering(489) 00:15:13.867 fused_ordering(490) 00:15:13.867 fused_ordering(491) 00:15:13.867 fused_ordering(492) 00:15:13.867 fused_ordering(493) 00:15:13.867 fused_ordering(494) 00:15:13.867 fused_ordering(495) 00:15:13.867 fused_ordering(496) 00:15:13.867 fused_ordering(497) 00:15:13.867 fused_ordering(498) 00:15:13.867 fused_ordering(499) 00:15:13.867 fused_ordering(500) 00:15:13.867 fused_ordering(501) 00:15:13.867 fused_ordering(502) 00:15:13.867 fused_ordering(503) 00:15:13.867 fused_ordering(504) 00:15:13.867 fused_ordering(505) 00:15:13.867 fused_ordering(506) 00:15:13.867 fused_ordering(507) 00:15:13.867 fused_ordering(508) 00:15:13.867 fused_ordering(509) 00:15:13.867 fused_ordering(510) 00:15:13.867 fused_ordering(511) 00:15:13.867 fused_ordering(512) 00:15:13.867 fused_ordering(513) 00:15:13.867 fused_ordering(514) 00:15:13.867 fused_ordering(515) 00:15:13.867 fused_ordering(516) 00:15:13.867 fused_ordering(517) 00:15:13.867 fused_ordering(518) 00:15:13.867 fused_ordering(519) 00:15:13.867 fused_ordering(520) 00:15:13.867 fused_ordering(521) 00:15:13.867 fused_ordering(522) 00:15:13.867 fused_ordering(523) 00:15:13.867 fused_ordering(524) 00:15:13.867 fused_ordering(525) 00:15:13.867 fused_ordering(526) 00:15:13.867 
fused_ordering(527) 00:15:13.867 fused_ordering(528) 00:15:13.867 fused_ordering(529) 00:15:13.867 fused_ordering(530) 00:15:13.867 fused_ordering(531) 00:15:13.867 fused_ordering(532) 00:15:13.867 fused_ordering(533) 00:15:13.867 fused_ordering(534) 00:15:13.867 fused_ordering(535) 00:15:13.867 fused_ordering(536) 00:15:13.867 fused_ordering(537) 00:15:13.867 fused_ordering(538) 00:15:13.867 fused_ordering(539) 00:15:13.867 fused_ordering(540) 00:15:13.867 fused_ordering(541) 00:15:13.867 fused_ordering(542) 00:15:13.867 fused_ordering(543) 00:15:13.867 fused_ordering(544) 00:15:13.867 fused_ordering(545) 00:15:13.867 fused_ordering(546) 00:15:13.867 fused_ordering(547) 00:15:13.867 fused_ordering(548) 00:15:13.867 fused_ordering(549) 00:15:13.867 fused_ordering(550) 00:15:13.867 fused_ordering(551) 00:15:13.867 fused_ordering(552) 00:15:13.867 fused_ordering(553) 00:15:13.867 fused_ordering(554) 00:15:13.867 fused_ordering(555) 00:15:13.867 fused_ordering(556) 00:15:13.867 fused_ordering(557) 00:15:13.867 fused_ordering(558) 00:15:13.867 fused_ordering(559) 00:15:13.867 fused_ordering(560) 00:15:13.867 fused_ordering(561) 00:15:13.867 fused_ordering(562) 00:15:13.867 fused_ordering(563) 00:15:13.867 fused_ordering(564) 00:15:13.867 fused_ordering(565) 00:15:13.867 fused_ordering(566) 00:15:13.867 fused_ordering(567) 00:15:13.867 fused_ordering(568) 00:15:13.867 fused_ordering(569) 00:15:13.867 fused_ordering(570) 00:15:13.867 fused_ordering(571) 00:15:13.867 fused_ordering(572) 00:15:13.867 fused_ordering(573) 00:15:13.867 fused_ordering(574) 00:15:13.867 fused_ordering(575) 00:15:13.867 fused_ordering(576) 00:15:13.867 fused_ordering(577) 00:15:13.867 fused_ordering(578) 00:15:13.867 fused_ordering(579) 00:15:13.867 fused_ordering(580) 00:15:13.867 fused_ordering(581) 00:15:13.867 fused_ordering(582) 00:15:13.867 fused_ordering(583) 00:15:13.867 fused_ordering(584) 00:15:13.867 fused_ordering(585) 00:15:13.867 fused_ordering(586) 00:15:13.867 fused_ordering(587) 00:15:13.867 fused_ordering(588) 00:15:13.867 fused_ordering(589) 00:15:13.867 fused_ordering(590) 00:15:13.867 fused_ordering(591) 00:15:13.867 fused_ordering(592) 00:15:13.867 fused_ordering(593) 00:15:13.867 fused_ordering(594) 00:15:13.867 fused_ordering(595) 00:15:13.867 fused_ordering(596) 00:15:13.867 fused_ordering(597) 00:15:13.867 fused_ordering(598) 00:15:13.867 fused_ordering(599) 00:15:13.867 fused_ordering(600) 00:15:13.867 fused_ordering(601) 00:15:13.867 fused_ordering(602) 00:15:13.867 fused_ordering(603) 00:15:13.867 fused_ordering(604) 00:15:13.867 fused_ordering(605) 00:15:13.867 fused_ordering(606) 00:15:13.867 fused_ordering(607) 00:15:13.867 fused_ordering(608) 00:15:13.867 fused_ordering(609) 00:15:13.867 fused_ordering(610) 00:15:13.867 fused_ordering(611) 00:15:13.867 fused_ordering(612) 00:15:13.867 fused_ordering(613) 00:15:13.867 fused_ordering(614) 00:15:13.867 fused_ordering(615) 00:15:14.810 fused_ordering(616) 00:15:14.810 fused_ordering(617) 00:15:14.810 fused_ordering(618) 00:15:14.810 fused_ordering(619) 00:15:14.810 fused_ordering(620) 00:15:14.810 fused_ordering(621) 00:15:14.810 fused_ordering(622) 00:15:14.810 fused_ordering(623) 00:15:14.810 fused_ordering(624) 00:15:14.810 fused_ordering(625) 00:15:14.810 fused_ordering(626) 00:15:14.810 fused_ordering(627) 00:15:14.810 fused_ordering(628) 00:15:14.810 fused_ordering(629) 00:15:14.810 fused_ordering(630) 00:15:14.810 fused_ordering(631) 00:15:14.810 fused_ordering(632) 00:15:14.810 fused_ordering(633) 00:15:14.810 fused_ordering(634) 
00:15:14.810 fused_ordering(635) ... 00:15:15.384 fused_ordering(956) 00:15:15.384 
fused_ordering(957) 00:15:15.384 fused_ordering(958) 00:15:15.384 fused_ordering(959) 00:15:15.384 fused_ordering(960) 00:15:15.384 fused_ordering(961) 00:15:15.384 fused_ordering(962) 00:15:15.384 fused_ordering(963) 00:15:15.384 fused_ordering(964) 00:15:15.384 fused_ordering(965) 00:15:15.384 fused_ordering(966) 00:15:15.384 fused_ordering(967) 00:15:15.384 fused_ordering(968) 00:15:15.384 fused_ordering(969) 00:15:15.384 fused_ordering(970) 00:15:15.384 fused_ordering(971) 00:15:15.384 fused_ordering(972) 00:15:15.384 fused_ordering(973) 00:15:15.384 fused_ordering(974) 00:15:15.384 fused_ordering(975) 00:15:15.384 fused_ordering(976) 00:15:15.384 fused_ordering(977) 00:15:15.384 fused_ordering(978) 00:15:15.384 fused_ordering(979) 00:15:15.384 fused_ordering(980) 00:15:15.384 fused_ordering(981) 00:15:15.384 fused_ordering(982) 00:15:15.384 fused_ordering(983) 00:15:15.384 fused_ordering(984) 00:15:15.384 fused_ordering(985) 00:15:15.384 fused_ordering(986) 00:15:15.384 fused_ordering(987) 00:15:15.384 fused_ordering(988) 00:15:15.384 fused_ordering(989) 00:15:15.384 fused_ordering(990) 00:15:15.384 fused_ordering(991) 00:15:15.384 fused_ordering(992) 00:15:15.384 fused_ordering(993) 00:15:15.384 fused_ordering(994) 00:15:15.384 fused_ordering(995) 00:15:15.384 fused_ordering(996) 00:15:15.384 fused_ordering(997) 00:15:15.384 fused_ordering(998) 00:15:15.384 fused_ordering(999) 00:15:15.384 fused_ordering(1000) 00:15:15.384 fused_ordering(1001) 00:15:15.384 fused_ordering(1002) 00:15:15.384 fused_ordering(1003) 00:15:15.384 fused_ordering(1004) 00:15:15.384 fused_ordering(1005) 00:15:15.384 fused_ordering(1006) 00:15:15.384 fused_ordering(1007) 00:15:15.384 fused_ordering(1008) 00:15:15.384 fused_ordering(1009) 00:15:15.384 fused_ordering(1010) 00:15:15.384 fused_ordering(1011) 00:15:15.384 fused_ordering(1012) 00:15:15.384 fused_ordering(1013) 00:15:15.384 fused_ordering(1014) 00:15:15.384 fused_ordering(1015) 00:15:15.384 fused_ordering(1016) 00:15:15.384 fused_ordering(1017) 00:15:15.384 fused_ordering(1018) 00:15:15.384 fused_ordering(1019) 00:15:15.384 fused_ordering(1020) 00:15:15.384 fused_ordering(1021) 00:15:15.384 fused_ordering(1022) 00:15:15.384 fused_ordering(1023) 00:15:15.384 21:10:53 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:15.384 21:10:53 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:15.384 21:10:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:15.384 21:10:53 -- nvmf/common.sh@116 -- # sync 00:15:15.384 21:10:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:15.384 21:10:53 -- nvmf/common.sh@119 -- # set +e 00:15:15.384 21:10:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:15.384 21:10:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:15.384 rmmod nvme_tcp 00:15:15.384 rmmod nvme_fabrics 00:15:15.384 rmmod nvme_keyring 00:15:15.384 21:10:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.384 21:10:53 -- nvmf/common.sh@123 -- # set -e 00:15:15.384 21:10:53 -- nvmf/common.sh@124 -- # return 0 00:15:15.384 21:10:53 -- nvmf/common.sh@477 -- # '[' -n 2309633 ']' 00:15:15.384 21:10:53 -- nvmf/common.sh@478 -- # killprocess 2309633 00:15:15.384 21:10:53 -- common/autotest_common.sh@926 -- # '[' -z 2309633 ']' 00:15:15.384 21:10:53 -- common/autotest_common.sh@930 -- # kill -0 2309633 00:15:15.384 21:10:53 -- common/autotest_common.sh@931 -- # uname 00:15:15.384 21:10:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:15.384 21:10:53 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 2309633 00:15:15.645 21:10:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:15.645 21:10:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:15.645 21:10:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2309633' 00:15:15.646 killing process with pid 2309633 00:15:15.646 21:10:53 -- common/autotest_common.sh@945 -- # kill 2309633 00:15:15.646 21:10:53 -- common/autotest_common.sh@950 -- # wait 2309633 00:15:15.646 21:10:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.646 21:10:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.646 21:10:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.646 21:10:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.646 21:10:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.646 21:10:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.646 21:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.646 21:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.566 21:10:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:17.828 00:15:17.828 real 0m13.619s 00:15:17.828 user 0m7.590s 00:15:17.828 sys 0m7.397s 00:15:17.828 21:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.828 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:15:17.828 ************************************ 00:15:17.828 END TEST nvmf_fused_ordering 00:15:17.828 ************************************ 00:15:17.828 21:10:55 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:17.828 21:10:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:17.828 21:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.828 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:15:17.828 ************************************ 00:15:17.828 START TEST nvmf_delete_subsystem 00:15:17.828 ************************************ 00:15:17.828 21:10:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:17.828 * Looking for test storage... 
00:15:17.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.828 21:10:55 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.828 21:10:55 -- nvmf/common.sh@7 -- # uname -s 00:15:17.828 21:10:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.828 21:10:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.828 21:10:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.828 21:10:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.828 21:10:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.828 21:10:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.828 21:10:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.828 21:10:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.828 21:10:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.828 21:10:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.828 21:10:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.828 21:10:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.828 21:10:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.828 21:10:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.828 21:10:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.828 21:10:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.828 21:10:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.828 21:10:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.828 21:10:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.828 21:10:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 21:10:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 21:10:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 21:10:55 -- paths/export.sh@5 -- # export PATH 00:15:17.828 21:10:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 21:10:55 -- nvmf/common.sh@46 -- # : 0 00:15:17.828 21:10:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.828 21:10:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.828 21:10:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.828 21:10:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.828 21:10:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.828 21:10:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:17.828 21:10:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.828 21:10:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.828 21:10:55 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:17.828 21:10:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:17.828 21:10:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.828 21:10:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:17.828 21:10:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:17.829 21:10:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:17.829 21:10:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.829 21:10:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.829 21:10:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.829 21:10:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:17.829 21:10:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:17.829 21:10:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:17.829 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:15:25.976 21:11:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.976 21:11:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:25.976 21:11:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:25.976 21:11:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:25.976 21:11:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:25.976 21:11:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:25.976 21:11:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:25.976 21:11:02 -- nvmf/common.sh@294 -- # net_devs=() 00:15:25.976 21:11:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:25.976 21:11:02 -- nvmf/common.sh@295 -- # e810=() 00:15:25.976 21:11:02 -- nvmf/common.sh@295 -- # local -ga e810 00:15:25.976 21:11:02 -- nvmf/common.sh@296 -- # x722=() 
00:15:25.976 21:11:02 -- nvmf/common.sh@296 -- # local -ga x722 00:15:25.976 21:11:02 -- nvmf/common.sh@297 -- # mlx=() 00:15:25.976 21:11:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:25.976 21:11:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.976 21:11:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:25.976 21:11:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:25.976 21:11:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:25.976 21:11:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:25.976 21:11:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:25.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:25.976 21:11:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:25.976 21:11:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:25.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:25.976 21:11:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:25.976 21:11:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:25.976 21:11:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.976 21:11:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:25.976 21:11:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.976 21:11:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:25.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:25.976 21:11:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
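The xtrace above shows nvmf/common.sh narrowing the host's PCI devices down to the Intel E810 ports (vendor 0x8086, device 0x159b) and resolving each function to its kernel net device through sysfs; the same steps repeat below for the second port, 0000:4b:00.1. A minimal standalone sketch of that discovery under the same assumptions (lspci available, a kernel driver bound to the ports; the 159b ID and the "Found net devices under ..." wording are taken from the log itself):

    #!/usr/bin/env bash
    # Enumerate Intel E810 (8086:159b) ports and the netdev bound to each one,
    # the same mapping gather_supported_nvmf_pci_devs derives from sysfs.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue   # skip ports bound to a userspace driver (no netdev)
            echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done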
00:15:25.976 21:11:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:25.976 21:11:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.976 21:11:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:25.976 21:11:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.976 21:11:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:25.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:25.976 21:11:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.976 21:11:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:25.976 21:11:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:25.976 21:11:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:25.976 21:11:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.976 21:11:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.976 21:11:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.976 21:11:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:25.976 21:11:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.976 21:11:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.976 21:11:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:25.976 21:11:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.976 21:11:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.976 21:11:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:25.976 21:11:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:25.976 21:11:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.976 21:11:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.976 21:11:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.976 21:11:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.976 21:11:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:25.976 21:11:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.976 21:11:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.976 21:11:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.976 21:11:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:25.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:15:25.976 00:15:25.976 --- 10.0.0.2 ping statistics --- 00:15:25.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.976 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:15:25.976 21:11:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:15:25.976 00:15:25.976 --- 10.0.0.1 ping statistics --- 00:15:25.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.976 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:15:25.976 21:11:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.976 21:11:02 -- nvmf/common.sh@410 -- # return 0 00:15:25.976 21:11:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.976 21:11:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.976 21:11:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.976 21:11:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.976 21:11:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.976 21:11:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.976 21:11:02 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:25.976 21:11:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.976 21:11:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:25.976 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:15:25.976 21:11:02 -- nvmf/common.sh@469 -- # nvmfpid=2314670 00:15:25.976 21:11:02 -- nvmf/common.sh@470 -- # waitforlisten 2314670 00:15:25.976 21:11:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:25.976 21:11:02 -- common/autotest_common.sh@819 -- # '[' -z 2314670 ']' 00:15:25.976 21:11:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.976 21:11:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.976 21:11:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.976 21:11:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.976 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:15:25.976 [2024-06-08 21:11:02.982635] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:25.976 [2024-06-08 21:11:02.982697] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.976 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.976 [2024-06-08 21:11:03.052032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:25.977 [2024-06-08 21:11:03.124395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:25.977 [2024-06-08 21:11:03.124525] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.977 [2024-06-08 21:11:03.124534] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.977 [2024-06-08 21:11:03.124542] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
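Before the target application comes up, nvmf_tcp_init above carves the two E810 ports into a loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in iptables, and a ping in each direction confirms connectivity before nvmf_tgt is launched inside the namespace (its startup notices continue below). A condensed sketch of that setup, using the interface names and addresses from the log and omitting the error handling the real helper has:

    #!/usr/bin/env bash
    set -e
    TGT_IF=cvl_0_0  INI_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                  # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator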
00:15:25.977 [2024-06-08 21:11:03.124678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.977 [2024-06-08 21:11:03.124680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.977 21:11:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.977 21:11:03 -- common/autotest_common.sh@852 -- # return 0 00:15:25.977 21:11:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:25.977 21:11:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 21:11:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.977 21:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 [2024-06-08 21:11:03.792040] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.977 21:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:25.977 21:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 21:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.977 21:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 [2024-06-08 21:11:03.808196] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.977 21:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:25.977 21:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 NULL1 00:15:25.977 21:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:25.977 21:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 Delay0 00:15:25.977 21:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.977 21:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.977 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.977 21:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@28 -- # perf_pid=2314709 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:25.977 21:11:03 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:25.977 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.977 [2024-06-08 21:11:03.892848] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:27.887 21:11:05 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.887 21:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.887 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 starting I/O failed: -6 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 starting I/O failed: -6 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 starting I/O failed: -6 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 starting I/O failed: -6 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 starting I/O failed: -6 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.887 starting I/O failed: -6 00:15:27.887 Write completed with error (sct=0, sc=8) 00:15:27.887 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 starting I/O failed: -6 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 starting I/O failed: -6 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 starting I/O failed: -6 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 starting I/O failed: -6 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 starting I/O failed: -6 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 [2024-06-08 21:11:05.977615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd80e40 is same with the state(5) to be set 
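The commands above are the heart of the test: a subsystem whose only namespace sits on a delay bdev is built over the freshly created TCP transport, spdk_nvme_perf is pointed at it, and after two seconds the subsystem is deleted out from under the full queue, which is why every outstanding command completes with an error (the sct=0/sc=8 completions that continue below). The same sequence as a plain script against a running nvmf_tgt, with scripts/rpc.py standing in for the test's rpc_cmd wrapper; the RPC names and arguments are copied from the log, and the SPDK paths are placeholders for wherever the build tree lives:

    #!/usr/bin/env bash
    RPC="scripts/rpc.py"                    # run from an SPDK checkout (placeholder path)
    PERF="build/bin/spdk_nvme_perf"         # placeholder path to the perf tool
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512    # 1000 MB null backing bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added to every read/write
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0

    # Queue up I/O, then delete the subsystem while it is still in flight.
    $PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $RPC nvmf_delete_subsystem "$NQN"
    wait "$perf_pid" || true                # perf exits reporting errors for the aborted I/O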
00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:27.888 Write completed with error (sct=0, sc=8) 00:15:27.888 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 
00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 starting I/O failed: -6 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 [2024-06-08 21:11:05.981129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc0bc000c00 is same with the state(5) to be set 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error 
(sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Write completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:28.149 Read completed with error (sct=0, sc=8) 00:15:29.098 [2024-06-08 21:11:06.950499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f910 is same with the state(5) to be set 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Read completed with error (sct=0, sc=8) 00:15:29.098 Write completed with error (sct=0, sc=8) 00:15:29.099 [2024-06-08 21:11:06.981518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78640 is same with the state(5) to be set 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 
00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 [2024-06-08 21:11:06.981666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78ba0 is same with the state(5) to be set 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 [2024-06-08 21:11:06.982274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc0bc00bf20 is same with the state(5) to be set 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Write completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 Read completed with error (sct=0, sc=8) 00:15:29.099 [2024-06-08 21:11:06.983775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc0bc00c600 is same with the state(5) to be set 00:15:29.099 [2024-06-08 21:11:06.984328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6f910 (9): Bad file descriptor 00:15:29.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:29.099 Initializing NVMe Controllers 00:15:29.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.099 Controller IO queue size 128, less than required. 00:15:29.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:29.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:29.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:29.099 Initialization complete. Launching workers. 00:15:29.099 ======================================================== 00:15:29.099 Latency(us) 00:15:29.099 Device Information : IOPS MiB/s Average min max 00:15:29.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.78 0.08 890319.55 315.68 1006714.94 00:15:29.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.85 0.08 927778.19 272.62 1009485.22 00:15:29.099 ======================================================== 00:15:29.099 Total : 327.63 0.16 908138.02 272.62 1009485.22 00:15:29.099 00:15:29.099 21:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.099 21:11:06 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:29.099 21:11:06 -- target/delete_subsystem.sh@35 -- # kill -0 2314709 00:15:29.099 21:11:06 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:29.422 21:11:07 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:29.422 21:11:07 -- target/delete_subsystem.sh@35 -- # kill -0 2314709 00:15:29.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2314709) - No such process 00:15:29.422 21:11:07 -- target/delete_subsystem.sh@45 -- # NOT wait 2314709 00:15:29.422 21:11:07 -- common/autotest_common.sh@640 -- # local es=0 00:15:29.422 21:11:07 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2314709 00:15:29.422 21:11:07 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:29.422 21:11:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:29.422 21:11:07 -- common/autotest_common.sh@632 -- # type -t wait 00:15:29.422 21:11:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:29.422 21:11:07 -- common/autotest_common.sh@643 -- # wait 2314709 00:15:29.422 21:11:07 -- common/autotest_common.sh@643 -- # es=1 00:15:29.422 21:11:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:29.422 21:11:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:29.422 21:11:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:29.422 21:11:07 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:29.422 21:11:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.422 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:15:29.683 21:11:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.683 21:11:07 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.683 21:11:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.683 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:15:29.683 [2024-06-08 21:11:07.516642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.683 21:11:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.683 21:11:07 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.683 21:11:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.683 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:15:29.683 21:11:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.683 21:11:07 -- 
target/delete_subsystem.sh@54 -- # perf_pid=2315495 00:15:29.683 21:11:07 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:29.683 21:11:07 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:29.683 21:11:07 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:29.683 21:11:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.683 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.683 [2024-06-08 21:11:07.583774] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:30.255 21:11:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:30.255 21:11:08 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:30.255 21:11:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:30.516 21:11:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:30.516 21:11:08 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:30.516 21:11:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.087 21:11:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.087 21:11:09 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:31.087 21:11:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.659 21:11:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.659 21:11:09 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:31.659 21:11:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.231 21:11:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:32.231 21:11:10 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:32.231 21:11:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.491 21:11:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:32.491 21:11:10 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:32.491 21:11:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.753 Initializing NVMe Controllers 00:15:32.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.753 Controller IO queue size 128, less than required. 00:15:32.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:32.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:32.753 Initialization complete. Launching workers. 
00:15:32.753 ======================================================== 00:15:32.753 Latency(us) 00:15:32.753 Device Information : IOPS MiB/s Average min max 00:15:32.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002914.52 1000262.25 1006941.44 00:15:32.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003389.93 1000276.73 1009352.92 00:15:32.753 ======================================================== 00:15:32.753 Total : 256.00 0.12 1003152.22 1000262.25 1009352.92 00:15:32.753 00:15:33.013 21:11:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:33.013 21:11:11 -- target/delete_subsystem.sh@57 -- # kill -0 2315495 00:15:33.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2315495) - No such process 00:15:33.013 21:11:11 -- target/delete_subsystem.sh@67 -- # wait 2315495 00:15:33.014 21:11:11 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:33.014 21:11:11 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:33.014 21:11:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:33.014 21:11:11 -- nvmf/common.sh@116 -- # sync 00:15:33.014 21:11:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:33.014 21:11:11 -- nvmf/common.sh@119 -- # set +e 00:15:33.014 21:11:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:33.014 21:11:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:33.014 rmmod nvme_tcp 00:15:33.014 rmmod nvme_fabrics 00:15:33.275 rmmod nvme_keyring 00:15:33.275 21:11:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:33.275 21:11:11 -- nvmf/common.sh@123 -- # set -e 00:15:33.275 21:11:11 -- nvmf/common.sh@124 -- # return 0 00:15:33.275 21:11:11 -- nvmf/common.sh@477 -- # '[' -n 2314670 ']' 00:15:33.275 21:11:11 -- nvmf/common.sh@478 -- # killprocess 2314670 00:15:33.275 21:11:11 -- common/autotest_common.sh@926 -- # '[' -z 2314670 ']' 00:15:33.275 21:11:11 -- common/autotest_common.sh@930 -- # kill -0 2314670 00:15:33.275 21:11:11 -- common/autotest_common.sh@931 -- # uname 00:15:33.275 21:11:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:33.275 21:11:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2314670 00:15:33.275 21:11:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:33.275 21:11:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:33.275 21:11:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2314670' 00:15:33.275 killing process with pid 2314670 00:15:33.275 21:11:11 -- common/autotest_common.sh@945 -- # kill 2314670 00:15:33.275 21:11:11 -- common/autotest_common.sh@950 -- # wait 2314670 00:15:33.275 21:11:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:33.275 21:11:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:33.275 21:11:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:33.275 21:11:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.275 21:11:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:33.275 21:11:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.275 21:11:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.275 21:11:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.823 21:11:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:35.823 00:15:35.823 real 0m17.704s 00:15:35.823 user 0m30.439s 00:15:35.823 sys 0m6.099s 
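In the second pass the subsystem is recreated and the 3-second perf run is allowed to finish instead of being interrupted: the script simply polls the perf process with kill -0 every half second, bounded by a retry counter, until it exits on its own. The reported averages of roughly 1,003,000 us per I/O line up with the 1,000,000 us read/write latency configured on Delay0 plus queueing time. A minimal sketch of that bounded wait, assuming the perf pid was captured in $perf_pid as in the sketch above:

    # Poll a background process until it exits, giving up after ~10 s (20 x 0.5 s),
    # mirroring the delay/kill -0 loop in delete_subsystem.sh.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "timed out waiting for pid $perf_pid" >&2; break; }
        sleep 0.5
    done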
00:15:35.823 21:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.823 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:15:35.823 ************************************ 00:15:35.823 END TEST nvmf_delete_subsystem 00:15:35.823 ************************************ 00:15:35.823 21:11:13 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:35.823 21:11:13 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:35.823 21:11:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:35.823 21:11:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:35.823 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:15:35.823 ************************************ 00:15:35.823 START TEST nvmf_nvme_cli 00:15:35.823 ************************************ 00:15:35.823 21:11:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:35.823 * Looking for test storage... 00:15:35.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.823 21:11:13 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.823 21:11:13 -- nvmf/common.sh@7 -- # uname -s 00:15:35.823 21:11:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.823 21:11:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.823 21:11:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.823 21:11:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.823 21:11:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.823 21:11:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.823 21:11:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.823 21:11:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.823 21:11:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.823 21:11:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.823 21:11:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:35.823 21:11:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:35.823 21:11:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.823 21:11:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.823 21:11:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.823 21:11:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.823 21:11:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.823 21:11:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.823 21:11:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.823 21:11:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.823 21:11:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.823 21:11:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.823 21:11:13 -- paths/export.sh@5 -- # export PATH 00:15:35.823 21:11:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.823 21:11:13 -- nvmf/common.sh@46 -- # : 0 00:15:35.823 21:11:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:35.823 21:11:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:35.823 21:11:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:35.823 21:11:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.823 21:11:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.823 21:11:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:35.823 21:11:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:35.823 21:11:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:35.823 21:11:13 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:35.823 21:11:13 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:35.823 21:11:13 -- target/nvme_cli.sh@14 -- # devs=() 00:15:35.823 21:11:13 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:35.823 21:11:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:35.823 21:11:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.823 21:11:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:35.823 21:11:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:35.823 21:11:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:35.823 21:11:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.823 21:11:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.823 21:11:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.823 21:11:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:35.823 21:11:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:35.823 21:11:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:35.823 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:15:42.421 21:11:20 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:42.421 21:11:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:42.421 21:11:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:42.421 21:11:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:42.421 21:11:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:42.421 21:11:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:42.421 21:11:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:42.421 21:11:20 -- nvmf/common.sh@294 -- # net_devs=() 00:15:42.421 21:11:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:42.421 21:11:20 -- nvmf/common.sh@295 -- # e810=() 00:15:42.421 21:11:20 -- nvmf/common.sh@295 -- # local -ga e810 00:15:42.421 21:11:20 -- nvmf/common.sh@296 -- # x722=() 00:15:42.421 21:11:20 -- nvmf/common.sh@296 -- # local -ga x722 00:15:42.421 21:11:20 -- nvmf/common.sh@297 -- # mlx=() 00:15:42.421 21:11:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:42.421 21:11:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.421 21:11:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:42.421 21:11:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:42.421 21:11:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:42.421 21:11:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.421 21:11:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:42.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:42.421 21:11:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.421 21:11:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:42.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:42.421 21:11:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
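Note: the NIC selection above is driven purely by PCI vendor:device IDs — nvmf/common.sh buckets whatever an earlier bus scan placed in pci_bus_cache into the e810/x722/mlx arrays, and because the e810 branch is taken ([[ e810 == e810 ]]), only the two 0x8086:0x159b ports at 0000:4b:00.0/1 (ice driver) survive into pci_devs. A rough, illustrative condensation of that classification using only the IDs visible in the trace — not the library's actual code, and pci_bus_cache is assumed here to map "vendor:device" to a space-separated list of PCI addresses populated before this excerpt:

    declare -A pci_bus_cache                      # assumed: "vendor:device" -> PCI addresses
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})     # E810 variants handled by the ice driver
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # one of several ConnectX IDs checked above
    pci_devs=("${e810[@]}")                       # the e810 branch keeps only ice-backed ports
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # PCI address -> cvl_0_0 / cvl_0_1
    done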
00:15:42.421 21:11:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:42.421 21:11:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.421 21:11:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.421 21:11:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.421 21:11:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.421 21:11:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:42.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:42.421 21:11:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.421 21:11:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.421 21:11:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.421 21:11:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.421 21:11:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.421 21:11:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:42.421 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:42.421 21:11:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.421 21:11:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:42.421 21:11:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:42.421 21:11:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:42.421 21:11:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:42.421 21:11:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.421 21:11:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.421 21:11:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.421 21:11:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:42.421 21:11:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.421 21:11:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.421 21:11:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:42.421 21:11:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.421 21:11:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.421 21:11:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:42.421 21:11:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:42.421 21:11:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.421 21:11:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.682 21:11:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.682 21:11:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.682 21:11:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:42.682 21:11:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.682 21:11:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.682 21:11:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.682 21:11:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:42.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:42.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.884 ms 00:15:42.682 00:15:42.682 --- 10.0.0.2 ping statistics --- 00:15:42.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.682 rtt min/avg/max/mdev = 0.884/0.884/0.884/0.000 ms 00:15:42.682 21:11:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:15:42.682 00:15:42.682 --- 10.0.0.1 ping statistics --- 00:15:42.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.682 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:15:42.682 21:11:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.682 21:11:20 -- nvmf/common.sh@410 -- # return 0 00:15:42.682 21:11:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.682 21:11:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.682 21:11:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.682 21:11:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.682 21:11:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.682 21:11:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.682 21:11:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.682 21:11:20 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:42.682 21:11:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.682 21:11:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:42.682 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:15:42.942 21:11:20 -- nvmf/common.sh@469 -- # nvmfpid=2320408 00:15:42.942 21:11:20 -- nvmf/common.sh@470 -- # waitforlisten 2320408 00:15:42.942 21:11:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.942 21:11:20 -- common/autotest_common.sh@819 -- # '[' -z 2320408 ']' 00:15:42.942 21:11:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.942 21:11:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.942 21:11:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.942 21:11:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.942 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:15:42.942 [2024-06-08 21:11:20.831529] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:42.943 [2024-06-08 21:11:20.831610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.943 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.943 [2024-06-08 21:11:20.906418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.943 [2024-06-08 21:11:20.979818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.943 [2024-06-08 21:11:20.979955] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.943 [2024-06-08 21:11:20.979965] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:42.943 [2024-06-08 21:11:20.979974] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.943 [2024-06-08 21:11:20.980115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.943 [2024-06-08 21:11:20.980231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.943 [2024-06-08 21:11:20.980388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.943 [2024-06-08 21:11:20.980389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.514 21:11:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.514 21:11:21 -- common/autotest_common.sh@852 -- # return 0 00:15:43.514 21:11:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.514 21:11:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:43.514 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 21:11:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.775 21:11:21 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 [2024-06-08 21:11:21.637527] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 Malloc0 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 Malloc1 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 [2024-06-08 21:11:21.727458] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:43.775 21:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.775 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:15:43.775 21:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.775 21:11:21 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:43.775 00:15:43.775 Discovery Log Number of Records 2, Generation counter 2 00:15:43.775 =====Discovery Log Entry 0====== 00:15:43.775 trtype: tcp 00:15:43.775 adrfam: ipv4 00:15:43.775 subtype: current discovery subsystem 00:15:43.775 treq: not required 00:15:43.775 portid: 0 00:15:43.775 trsvcid: 4420 00:15:43.775 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:43.775 traddr: 10.0.0.2 00:15:43.775 eflags: explicit discovery connections, duplicate discovery information 00:15:43.775 sectype: none 00:15:43.775 =====Discovery Log Entry 1====== 00:15:43.776 trtype: tcp 00:15:43.776 adrfam: ipv4 00:15:43.776 subtype: nvme subsystem 00:15:43.776 treq: not required 00:15:43.776 portid: 0 00:15:43.776 trsvcid: 4420 00:15:43.776 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:43.776 traddr: 10.0.0.2 00:15:43.776 eflags: none 00:15:43.776 sectype: none 00:15:43.776 21:11:21 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:43.776 21:11:21 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:43.776 21:11:21 -- nvmf/common.sh@510 -- # local dev _ 00:15:43.776 21:11:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:43.776 21:11:21 -- nvmf/common.sh@509 -- # nvme list 00:15:43.776 21:11:21 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:43.776 21:11:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:43.776 21:11:21 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:43.776 21:11:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:43.776 21:11:21 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:43.776 21:11:21 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.689 21:11:23 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:45.689 21:11:23 -- common/autotest_common.sh@1177 -- # local i=0 00:15:45.689 21:11:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.689 21:11:23 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:45.689 21:11:23 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:45.689 21:11:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:47.609 21:11:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:47.609 21:11:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:47.609 21:11:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.609 21:11:25 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:47.609 21:11:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.609 21:11:25 -- common/autotest_common.sh@1187 -- # return 0 00:15:47.609 21:11:25 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:47.609 21:11:25 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@509 -- # nvme list 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:47.609 /dev/nvme0n1 ]] 00:15:47.609 21:11:25 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:47.609 21:11:25 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:47.609 21:11:25 -- nvmf/common.sh@510 -- # local dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@509 -- # nvme list 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:47.609 21:11:25 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:47.609 21:11:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:47.609 21:11:25 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:47.609 21:11:25 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.870 21:11:25 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.870 21:11:25 -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.130 21:11:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:48.130 21:11:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.130 21:11:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:48.130 21:11:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.130 21:11:25 -- common/autotest_common.sh@1210 -- # return 0 00:15:48.130 21:11:25 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:48.130 21:11:25 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.130 21:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:48.130 21:11:25 -- common/autotest_common.sh@10 -- # set +x 00:15:48.130 21:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:48.130 21:11:25 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:48.130 21:11:25 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:48.130 21:11:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:48.130 21:11:25 -- nvmf/common.sh@116 -- # sync 00:15:48.130 21:11:26 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:48.130 21:11:26 -- nvmf/common.sh@119 -- # set +e 00:15:48.130 21:11:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:48.130 21:11:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:48.130 rmmod nvme_tcp 00:15:48.130 rmmod nvme_fabrics 00:15:48.130 rmmod nvme_keyring 00:15:48.130 21:11:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:48.130 21:11:26 -- nvmf/common.sh@123 -- # set -e 00:15:48.130 21:11:26 -- nvmf/common.sh@124 -- # return 0 00:15:48.130 21:11:26 -- nvmf/common.sh@477 -- # '[' -n 2320408 ']' 00:15:48.130 21:11:26 -- nvmf/common.sh@478 -- # killprocess 2320408 00:15:48.130 21:11:26 -- common/autotest_common.sh@926 -- # '[' -z 2320408 ']' 00:15:48.130 21:11:26 -- common/autotest_common.sh@930 -- # kill -0 2320408 00:15:48.130 21:11:26 -- common/autotest_common.sh@931 -- # uname 00:15:48.130 21:11:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:48.130 21:11:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2320408 00:15:48.130 21:11:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:48.130 21:11:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:48.130 21:11:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2320408' 00:15:48.130 killing process with pid 2320408 00:15:48.130 21:11:26 -- common/autotest_common.sh@945 -- # kill 2320408 00:15:48.130 21:11:26 -- common/autotest_common.sh@950 -- # wait 2320408 00:15:48.391 21:11:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:48.391 21:11:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:48.391 21:11:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:48.391 21:11:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.391 21:11:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:48.391 21:11:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.391 21:11:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.391 21:11:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.307 21:11:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:50.307 00:15:50.307 real 0m14.889s 00:15:50.307 user 0m23.192s 00:15:50.307 sys 0m5.905s 00:15:50.307 21:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.307 21:11:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.307 ************************************ 00:15:50.307 END TEST nvmf_nvme_cli 00:15:50.307 ************************************ 00:15:50.307 21:11:28 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:50.307 21:11:28 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:50.307 21:11:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:50.307 21:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.307 21:11:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.307 ************************************ 00:15:50.307 START TEST nvmf_host_management 00:15:50.307 ************************************ 00:15:50.307 21:11:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:50.569 * Looking for test storage... 
00:15:50.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.569 21:11:28 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.569 21:11:28 -- nvmf/common.sh@7 -- # uname -s 00:15:50.569 21:11:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.569 21:11:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.569 21:11:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.569 21:11:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.569 21:11:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.569 21:11:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.569 21:11:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.569 21:11:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.569 21:11:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.569 21:11:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.569 21:11:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.569 21:11:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.569 21:11:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.569 21:11:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.570 21:11:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.570 21:11:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.570 21:11:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.570 21:11:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.570 21:11:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.570 21:11:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.570 21:11:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.570 21:11:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.570 21:11:28 -- paths/export.sh@5 -- # export PATH 00:15:50.570 21:11:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.570 21:11:28 -- nvmf/common.sh@46 -- # : 0 00:15:50.570 21:11:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.570 21:11:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.570 21:11:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.570 21:11:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.570 21:11:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.570 21:11:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:50.570 21:11:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.570 21:11:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.570 21:11:28 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.570 21:11:28 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.570 21:11:28 -- target/host_management.sh@104 -- # nvmftestinit 00:15:50.570 21:11:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.570 21:11:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.570 21:11:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.570 21:11:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.570 21:11:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.570 21:11:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.570 21:11:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.570 21:11:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.570 21:11:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:50.570 21:11:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:50.570 21:11:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:50.570 21:11:28 -- common/autotest_common.sh@10 -- # set +x 00:15:58.750 21:11:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:58.750 21:11:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:58.750 21:11:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:58.750 21:11:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:58.750 21:11:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:58.750 21:11:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:58.750 21:11:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:58.750 21:11:35 -- nvmf/common.sh@294 -- # net_devs=() 00:15:58.750 21:11:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:58.750 
21:11:35 -- nvmf/common.sh@295 -- # e810=() 00:15:58.750 21:11:35 -- nvmf/common.sh@295 -- # local -ga e810 00:15:58.750 21:11:35 -- nvmf/common.sh@296 -- # x722=() 00:15:58.750 21:11:35 -- nvmf/common.sh@296 -- # local -ga x722 00:15:58.750 21:11:35 -- nvmf/common.sh@297 -- # mlx=() 00:15:58.750 21:11:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:58.750 21:11:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.750 21:11:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:58.750 21:11:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:58.750 21:11:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:58.750 21:11:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:58.750 21:11:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:58.750 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:58.750 21:11:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:58.750 21:11:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:58.750 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:58.750 21:11:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:58.750 21:11:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:58.750 21:11:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.750 21:11:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:58.750 21:11:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.750 21:11:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:15:58.750 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:58.750 21:11:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.750 21:11:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:58.750 21:11:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.750 21:11:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:58.750 21:11:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.750 21:11:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:58.750 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:58.750 21:11:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.750 21:11:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:58.750 21:11:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:58.750 21:11:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:58.750 21:11:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.750 21:11:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.750 21:11:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.750 21:11:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:58.750 21:11:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.750 21:11:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.750 21:11:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:58.750 21:11:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.750 21:11:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.750 21:11:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:58.750 21:11:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:58.750 21:11:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.750 21:11:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.750 21:11:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.750 21:11:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.750 21:11:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:58.750 21:11:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:58.750 21:11:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.750 21:11:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.750 21:11:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:58.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:15:58.750 00:15:58.750 --- 10.0.0.2 ping statistics --- 00:15:58.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.750 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:15:58.750 21:11:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:15:58.750 00:15:58.750 --- 10.0.0.1 ping statistics --- 00:15:58.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.750 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:15:58.750 21:11:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.750 21:11:35 -- nvmf/common.sh@410 -- # return 0 00:15:58.750 21:11:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:58.750 21:11:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.750 21:11:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:58.750 21:11:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.750 21:11:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:58.750 21:11:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:58.750 21:11:35 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:58.750 21:11:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:58.750 21:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.750 21:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 ************************************ 00:15:58.751 START TEST nvmf_host_management 00:15:58.751 ************************************ 00:15:58.751 21:11:35 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:15:58.751 21:11:35 -- target/host_management.sh@69 -- # starttarget 00:15:58.751 21:11:35 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:58.751 21:11:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:58.751 21:11:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:58.751 21:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 21:11:35 -- nvmf/common.sh@469 -- # nvmfpid=2325729 00:15:58.751 21:11:35 -- nvmf/common.sh@470 -- # waitforlisten 2325729 00:15:58.751 21:11:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:58.751 21:11:35 -- common/autotest_common.sh@819 -- # '[' -z 2325729 ']' 00:15:58.751 21:11:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.751 21:11:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.751 21:11:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.751 21:11:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.751 21:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 [2024-06-08 21:11:35.819841] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:58.751 [2024-06-08 21:11:35.819902] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.751 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.751 [2024-06-08 21:11:35.908478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.751 [2024-06-08 21:11:36.000151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.751 [2024-06-08 21:11:36.000306] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.751 [2024-06-08 21:11:36.000315] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.751 [2024-06-08 21:11:36.000323] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.751 [2024-06-08 21:11:36.000462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.751 [2024-06-08 21:11:36.000687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.751 [2024-06-08 21:11:36.000852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:58.751 [2024-06-08 21:11:36.000853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.751 21:11:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.751 21:11:36 -- common/autotest_common.sh@852 -- # return 0 00:15:58.751 21:11:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.751 21:11:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:58.751 21:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 21:11:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.751 21:11:36 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.751 21:11:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.751 21:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 [2024-06-08 21:11:36.643448] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.751 21:11:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.751 21:11:36 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:58.751 21:11:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:58.751 21:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 21:11:36 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:58.751 21:11:36 -- target/host_management.sh@23 -- # cat 00:15:58.751 21:11:36 -- target/host_management.sh@30 -- # rpc_cmd 00:15:58.751 21:11:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.751 21:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 Malloc0 00:15:58.751 [2024-06-08 21:11:36.702697] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.751 21:11:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.751 21:11:36 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:58.751 21:11:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:58.751 21:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 21:11:36 -- target/host_management.sh@73 -- # perfpid=2325879 00:15:58.751 21:11:36 -- target/host_management.sh@74 -- # 
waitforlisten 2325879 /var/tmp/bdevperf.sock 00:15:58.751 21:11:36 -- common/autotest_common.sh@819 -- # '[' -z 2325879 ']' 00:15:58.751 21:11:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.751 21:11:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.751 21:11:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:58.751 21:11:36 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:58.751 21:11:36 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:58.751 21:11:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.751 21:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:58.751 21:11:36 -- nvmf/common.sh@520 -- # config=() 00:15:58.751 21:11:36 -- nvmf/common.sh@520 -- # local subsystem config 00:15:58.751 21:11:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:58.751 21:11:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:58.751 { 00:15:58.751 "params": { 00:15:58.751 "name": "Nvme$subsystem", 00:15:58.751 "trtype": "$TEST_TRANSPORT", 00:15:58.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.751 "adrfam": "ipv4", 00:15:58.751 "trsvcid": "$NVMF_PORT", 00:15:58.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.751 "hdgst": ${hdgst:-false}, 00:15:58.751 "ddgst": ${ddgst:-false} 00:15:58.751 }, 00:15:58.751 "method": "bdev_nvme_attach_controller" 00:15:58.751 } 00:15:58.751 EOF 00:15:58.751 )") 00:15:58.751 21:11:36 -- nvmf/common.sh@542 -- # cat 00:15:58.751 21:11:36 -- nvmf/common.sh@544 -- # jq . 00:15:58.751 21:11:36 -- nvmf/common.sh@545 -- # IFS=, 00:15:58.751 21:11:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:58.751 "params": { 00:15:58.751 "name": "Nvme0", 00:15:58.751 "trtype": "tcp", 00:15:58.751 "traddr": "10.0.0.2", 00:15:58.751 "adrfam": "ipv4", 00:15:58.751 "trsvcid": "4420", 00:15:58.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:58.751 "hdgst": false, 00:15:58.751 "ddgst": false 00:15:58.751 }, 00:15:58.751 "method": "bdev_nvme_attach_controller" 00:15:58.751 }' 00:15:58.751 [2024-06-08 21:11:36.807100] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:58.751 [2024-06-08 21:11:36.807165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325879 ] 00:15:58.751 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.013 [2024-06-08 21:11:36.866473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.013 [2024-06-08 21:11:36.929244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.273 Running I/O for 10 seconds... 
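Note: bdevperf is launched here with -q 64 -o 65536 -w verify -t 10 against the target, taking its bdev configuration from gen_nvmf_target_json over /dev/fd/63. For readability, the connection fragment that printf emitted above, re-indented (only this fragment is visible in the log; the surrounding JSON document bdevperf actually receives is not shown in this excerpt):

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

Once the verify workload has accumulated at least 100 reads (read_io_count=1106 below), the test calls nvmf_subsystem_remove_host for nqn.2016-06.io.spdk:host0 on cnode0; the long runs of tcp.c:1574 "recv state of tqpair" messages and the ABORTED - SQ DELETION completions that follow are the target tearing down that host's queue pairs while I/O is still in flight, i.e. the scenario this host-management test is driving.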
00:15:59.534 21:11:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.534 21:11:37 -- common/autotest_common.sh@852 -- # return 0 00:15:59.534 21:11:37 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:59.534 21:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.534 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:59.534 21:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.534 21:11:37 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:59.534 21:11:37 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:59.534 21:11:37 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:59.534 21:11:37 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:59.534 21:11:37 -- target/host_management.sh@52 -- # local ret=1 00:15:59.534 21:11:37 -- target/host_management.sh@53 -- # local i 00:15:59.534 21:11:37 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:59.534 21:11:37 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:59.534 21:11:37 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:59.534 21:11:37 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:59.534 21:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.534 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:59.534 21:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.797 21:11:37 -- target/host_management.sh@55 -- # read_io_count=1106 00:15:59.797 21:11:37 -- target/host_management.sh@58 -- # '[' 1106 -ge 100 ']' 00:15:59.797 21:11:37 -- target/host_management.sh@59 -- # ret=0 00:15:59.797 21:11:37 -- target/host_management.sh@60 -- # break 00:15:59.797 21:11:37 -- target/host_management.sh@64 -- # return 0 00:15:59.797 21:11:37 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:59.797 21:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.797 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:59.797 [2024-06-08 21:11:37.633797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.797 [2024-06-08 21:11:37.633883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the 
state(5) to be set 00:15:59.798 [2024-06-08 21:11:37.634032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.798 [2024-06-08 21:11:37.634039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.798 [2024-06-08 21:11:37.634045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.798 [2024-06-08 21:11:37.634052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac530 is same with the state(5) to be set 00:15:59.798 [2024-06-08 21:11:37.634624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:59.798 [2024-06-08 21:11:37.634969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.634987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.634996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.635004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.635013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.635020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.635029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.635036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.635045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.635053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.798 [2024-06-08 21:11:37.635062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.798 [2024-06-08 21:11:37.635069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 
[2024-06-08 21:11:37.635134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 
21:11:37.635298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.799 [2024-06-08 21:11:37.635619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.799 [2024-06-08 21:11:37.635628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.800 [2024-06-08 21:11:37.635635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.635643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.800 [2024-06-08 21:11:37.635650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.635660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.800 [2024-06-08 21:11:37.635667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.635676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.800 [2024-06-08 21:11:37.635683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.635692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.800 [2024-06-08 21:11:37.635698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.635708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:59.800 [2024-06-08 21:11:37.635715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.635723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221a370 is same with the state(5) to be set 00:15:59.800 [2024-06-08 21:11:37.635764] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x221a370 was disconnected and freed. reset controller. 
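The block above is the expected fallout of pulling the host out of the subsystem mid-run: the target tears down the TCP qpair, every queued READ/WRITE completes back to bdevperf with ABORTED - SQ DELETION status, and bdev_nvme frees the qpair and schedules a controller reset. If you need to digest a dump like this, a quick tally by opcode is enough; bdevperf.log below stands for a hypothetical capture of the output above:

    # count aborted commands per opcode (READ/WRITE) in a saved bdevperf log
    grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]+' bdevperf.log | awk '{print $NF}' | sort | uniq -c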
00:15:59.800 [2024-06-08 21:11:37.636950] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:59.800 task offset: 28416 on job bdev=Nvme0n1 fails 00:15:59.800 00:15:59.800 Latency(us) 00:15:59.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.800 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:59.800 Job: Nvme0n1 ended in about 0.52 seconds with error 00:15:59.800 Verification LBA range: start 0x0 length 0x400 00:15:59.800 Nvme0n1 : 0.52 2339.05 146.19 122.10 0.00 25630.95 1884.16 36481.71 00:15:59.800 =================================================================================================================== 00:15:59.800 Total : 2339.05 146.19 122.10 0.00 25630.95 1884.16 36481.71 00:15:59.800 21:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.800 [2024-06-08 21:11:37.638940] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:59.800 [2024-06-08 21:11:37.638964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221c9c0 (9): Bad file descriptor 00:15:59.800 21:11:37 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:59.800 21:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.800 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 [2024-06-08 21:11:37.645503] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:15:59.800 [2024-06-08 21:11:37.645628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:59.800 [2024-06-08 21:11:37.645658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.800 [2024-06-08 21:11:37.645675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:15:59.800 [2024-06-08 21:11:37.645683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:15:59.800 [2024-06-08 21:11:37.645690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:15:59.800 [2024-06-08 21:11:37.645697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x221c9c0 00:15:59.800 [2024-06-08 21:11:37.645718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221c9c0 (9): Bad file descriptor 00:15:59.800 [2024-06-08 21:11:37.645731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:59.800 [2024-06-08 21:11:37.645738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:59.800 [2024-06-08 21:11:37.645746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:59.800 [2024-06-08 21:11:37.645758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
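The failure above is deliberate: host_management.sh@84 removed the host NQN from cnode0's allowed-host list while bdevperf was mid-verify, and @85 adds it back, but the reconnect already in flight is refused (the CONNECT completes with sct 1, sc 132, matching the target-side "does not allow host" error), so the controller reset fails as logged. Outside the harness the same two steps are plain rpc.py calls, run from the SPDK checkout against the target's default RPC socket:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0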
00:15:59.800 21:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.800 21:11:37 -- target/host_management.sh@87 -- # sleep 1 00:16:00.742 21:11:38 -- target/host_management.sh@91 -- # kill -9 2325879 00:16:00.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2325879) - No such process 00:16:00.742 21:11:38 -- target/host_management.sh@91 -- # true 00:16:00.742 21:11:38 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:00.742 21:11:38 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:00.742 21:11:38 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:00.742 21:11:38 -- nvmf/common.sh@520 -- # config=() 00:16:00.742 21:11:38 -- nvmf/common.sh@520 -- # local subsystem config 00:16:00.742 21:11:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:00.742 21:11:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:00.742 { 00:16:00.742 "params": { 00:16:00.742 "name": "Nvme$subsystem", 00:16:00.742 "trtype": "$TEST_TRANSPORT", 00:16:00.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:00.742 "adrfam": "ipv4", 00:16:00.742 "trsvcid": "$NVMF_PORT", 00:16:00.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:00.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:00.742 "hdgst": ${hdgst:-false}, 00:16:00.742 "ddgst": ${ddgst:-false} 00:16:00.742 }, 00:16:00.742 "method": "bdev_nvme_attach_controller" 00:16:00.742 } 00:16:00.742 EOF 00:16:00.742 )") 00:16:00.742 21:11:38 -- nvmf/common.sh@542 -- # cat 00:16:00.742 21:11:38 -- nvmf/common.sh@544 -- # jq . 00:16:00.742 21:11:38 -- nvmf/common.sh@545 -- # IFS=, 00:16:00.742 21:11:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:00.742 "params": { 00:16:00.742 "name": "Nvme0", 00:16:00.742 "trtype": "tcp", 00:16:00.742 "traddr": "10.0.0.2", 00:16:00.742 "adrfam": "ipv4", 00:16:00.742 "trsvcid": "4420", 00:16:00.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:00.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:00.742 "hdgst": false, 00:16:00.742 "ddgst": false 00:16:00.742 }, 00:16:00.742 "method": "bdev_nvme_attach_controller" 00:16:00.742 }' 00:16:00.742 [2024-06-08 21:11:38.712754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:00.742 [2024-06-08 21:11:38.712819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326237 ] 00:16:00.742 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.742 [2024-06-08 21:11:38.773195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.003 [2024-06-08 21:11:38.835445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.003 Running I/O for 1 seconds... 
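Here the SIGKILL at host_management.sh@91 finds nothing to kill (the first bdevperf already exited after the failed reset, hence "No such process"), the per-core lock files are cleared, and a second bdevperf run against the same generated JSON confirms the target serves I/O again once the host is re-allowed. The waitforio gate used earlier in this test is just an iostat poll over the bdevperf RPC socket; a rough equivalent of that check, with the bdev name and 100-read threshold taken from the trace above, is:

    # poll until Nvme0n1 has completed at least 100 reads, roughly what waitforio does above
    while [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')" -lt 100 ]; do
        sleep 1
    done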
00:16:01.946 00:16:01.946 Latency(us) 00:16:01.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.946 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:01.946 Verification LBA range: start 0x0 length 0x400 00:16:01.946 Nvme0n1 : 1.06 2337.04 146.06 0.00 0.00 25984.26 5079.04 45875.20 00:16:01.946 =================================================================================================================== 00:16:01.946 Total : 2337.04 146.06 0.00 0.00 25984.26 5079.04 45875.20 00:16:02.206 21:11:40 -- target/host_management.sh@101 -- # stoptarget 00:16:02.206 21:11:40 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:02.206 21:11:40 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:02.206 21:11:40 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:02.206 21:11:40 -- target/host_management.sh@40 -- # nvmftestfini 00:16:02.206 21:11:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.206 21:11:40 -- nvmf/common.sh@116 -- # sync 00:16:02.206 21:11:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.206 21:11:40 -- nvmf/common.sh@119 -- # set +e 00:16:02.206 21:11:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.206 21:11:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.206 rmmod nvme_tcp 00:16:02.206 rmmod nvme_fabrics 00:16:02.206 rmmod nvme_keyring 00:16:02.206 21:11:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:02.206 21:11:40 -- nvmf/common.sh@123 -- # set -e 00:16:02.206 21:11:40 -- nvmf/common.sh@124 -- # return 0 00:16:02.206 21:11:40 -- nvmf/common.sh@477 -- # '[' -n 2325729 ']' 00:16:02.206 21:11:40 -- nvmf/common.sh@478 -- # killprocess 2325729 00:16:02.206 21:11:40 -- common/autotest_common.sh@926 -- # '[' -z 2325729 ']' 00:16:02.206 21:11:40 -- common/autotest_common.sh@930 -- # kill -0 2325729 00:16:02.206 21:11:40 -- common/autotest_common.sh@931 -- # uname 00:16:02.206 21:11:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.206 21:11:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2325729 00:16:02.207 21:11:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:02.207 21:11:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:02.207 21:11:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2325729' 00:16:02.207 killing process with pid 2325729 00:16:02.207 21:11:40 -- common/autotest_common.sh@945 -- # kill 2325729 00:16:02.207 21:11:40 -- common/autotest_common.sh@950 -- # wait 2325729 00:16:02.467 [2024-06-08 21:11:40.390506] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:02.467 21:11:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:02.467 21:11:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:02.467 21:11:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:02.467 21:11:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.467 21:11:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:02.467 21:11:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.467 21:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.467 21:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.012 21:11:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
00:16:05.012 00:16:05.012 real 0m6.720s 00:16:05.012 user 0m19.967s 00:16:05.012 sys 0m1.115s 00:16:05.012 21:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.012 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:16:05.012 ************************************ 00:16:05.012 END TEST nvmf_host_management 00:16:05.012 ************************************ 00:16:05.012 21:11:42 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:05.012 00:16:05.012 real 0m14.141s 00:16:05.012 user 0m21.979s 00:16:05.012 sys 0m6.459s 00:16:05.012 21:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.012 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:16:05.012 ************************************ 00:16:05.012 END TEST nvmf_host_management 00:16:05.012 ************************************ 00:16:05.012 21:11:42 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:05.012 21:11:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:05.012 21:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.012 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:16:05.012 ************************************ 00:16:05.012 START TEST nvmf_lvol 00:16:05.012 ************************************ 00:16:05.012 21:11:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:05.012 * Looking for test storage... 00:16:05.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.012 21:11:42 -- nvmf/common.sh@7 -- # uname -s 00:16:05.012 21:11:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.012 21:11:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.012 21:11:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.012 21:11:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.012 21:11:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.012 21:11:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.012 21:11:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.012 21:11:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.012 21:11:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.012 21:11:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.012 21:11:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:05.012 21:11:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:05.012 21:11:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.012 21:11:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.012 21:11:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.012 21:11:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.012 21:11:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.012 21:11:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.012 21:11:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.012 21:11:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.012 21:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.012 21:11:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.012 21:11:42 -- paths/export.sh@5 -- # export PATH 00:16:05.012 21:11:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.012 21:11:42 -- nvmf/common.sh@46 -- # : 0 00:16:05.012 21:11:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:05.012 21:11:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:05.012 21:11:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:05.012 21:11:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.012 21:11:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.012 21:11:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:05.012 21:11:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:05.012 21:11:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:05.012 21:11:42 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:05.012 21:11:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:05.012 21:11:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:16:05.012 21:11:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:05.012 21:11:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:05.012 21:11:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:05.012 21:11:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.012 21:11:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.012 21:11:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.012 21:11:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:05.012 21:11:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:05.012 21:11:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:05.012 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:16:11.600 21:11:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:11.600 21:11:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:11.600 21:11:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:11.600 21:11:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:11.600 21:11:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:11.600 21:11:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:11.600 21:11:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:11.600 21:11:49 -- nvmf/common.sh@294 -- # net_devs=() 00:16:11.600 21:11:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:11.600 21:11:49 -- nvmf/common.sh@295 -- # e810=() 00:16:11.600 21:11:49 -- nvmf/common.sh@295 -- # local -ga e810 00:16:11.600 21:11:49 -- nvmf/common.sh@296 -- # x722=() 00:16:11.600 21:11:49 -- nvmf/common.sh@296 -- # local -ga x722 00:16:11.600 21:11:49 -- nvmf/common.sh@297 -- # mlx=() 00:16:11.600 21:11:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:11.600 21:11:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.600 21:11:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:11.600 21:11:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:11.600 21:11:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:11.600 21:11:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:11.600 21:11:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:11.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:11.600 21:11:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:11.600 21:11:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:11.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:11.600 21:11:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:11.600 21:11:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:11.600 21:11:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:11.600 21:11:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.600 21:11:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:11.600 21:11:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.601 21:11:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:11.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:11.601 21:11:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.601 21:11:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:11.601 21:11:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.601 21:11:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:11.601 21:11:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.601 21:11:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:11.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:11.601 21:11:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.601 21:11:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:11.601 21:11:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:11.601 21:11:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:11.601 21:11:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:11.601 21:11:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:11.601 21:11:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.601 21:11:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.601 21:11:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.601 21:11:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:11.601 21:11:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.601 21:11:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.601 21:11:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:11.601 21:11:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.601 21:11:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.601 21:11:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:11.601 21:11:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:11.601 21:11:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.601 21:11:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.601 21:11:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:16:11.601 21:11:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.601 21:11:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:11.601 21:11:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.863 21:11:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.863 21:11:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.863 21:11:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:11.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:16:11.863 00:16:11.863 --- 10.0.0.2 ping statistics --- 00:16:11.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.863 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:16:11.863 21:11:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:16:11.863 00:16:11.863 --- 10.0.0.1 ping statistics --- 00:16:11.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.863 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:16:11.863 21:11:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.863 21:11:49 -- nvmf/common.sh@410 -- # return 0 00:16:11.863 21:11:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.863 21:11:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.863 21:11:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.863 21:11:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.863 21:11:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.863 21:11:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.863 21:11:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.863 21:11:49 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:11.863 21:11:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.863 21:11:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:11.863 21:11:49 -- common/autotest_common.sh@10 -- # set +x 00:16:11.863 21:11:49 -- nvmf/common.sh@469 -- # nvmfpid=2330863 00:16:11.863 21:11:49 -- nvmf/common.sh@470 -- # waitforlisten 2330863 00:16:11.863 21:11:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:11.863 21:11:49 -- common/autotest_common.sh@819 -- # '[' -z 2330863 ']' 00:16:11.863 21:11:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.863 21:11:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.863 21:11:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.863 21:11:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.863 21:11:49 -- common/autotest_common.sh@10 -- # set +x 00:16:11.863 [2024-06-08 21:11:49.881225] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
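Before the lvol target comes up, nvmftestinit has already wired the two E810 ports together: one port goes into a network namespace for the target side (10.0.0.2) and the other stays in the root namespace for the initiator (10.0.0.1), with the cross-namespace pings above confirming the path. Stripped of the xtrace prefixes, the setup logged above amounts to the following sketch (interface names and addresses as logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator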
00:16:11.863 [2024-06-08 21:11:49.881291] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.863 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.863 [2024-06-08 21:11:49.951293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.124 [2024-06-08 21:11:50.026396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:12.124 [2024-06-08 21:11:50.026531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.124 [2024-06-08 21:11:50.026539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.124 [2024-06-08 21:11:50.026546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.124 [2024-06-08 21:11:50.026686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.124 [2024-06-08 21:11:50.026801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.124 [2024-06-08 21:11:50.026803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.696 21:11:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.696 21:11:50 -- common/autotest_common.sh@852 -- # return 0 00:16:12.696 21:11:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.696 21:11:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:12.696 21:11:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.696 21:11:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.696 21:11:50 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:12.957 [2024-06-08 21:11:50.823873] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.957 21:11:50 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.957 21:11:51 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:12.957 21:11:51 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.219 21:11:51 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:13.219 21:11:51 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:13.480 21:11:51 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:13.480 21:11:51 -- target/nvmf_lvol.sh@29 -- # lvs=cc8f22ea-1baa-4bb0-beaa-41c3fa3d595a 00:16:13.480 21:11:51 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc8f22ea-1baa-4bb0-beaa-41c3fa3d595a lvol 20 00:16:13.741 21:11:51 -- target/nvmf_lvol.sh@32 -- # lvol=64a93729-3fd9-40b4-8498-7adf94ff9666 00:16:13.741 21:11:51 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:14.002 21:11:51 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
64a93729-3fd9-40b4-8498-7adf94ff9666 00:16:14.002 21:11:52 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:14.262 [2024-06-08 21:11:52.144133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.262 21:11:52 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.262 21:11:52 -- target/nvmf_lvol.sh@42 -- # perf_pid=2331304 00:16:14.262 21:11:52 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:14.262 21:11:52 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:14.524 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.505 21:11:53 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 64a93729-3fd9-40b4-8498-7adf94ff9666 MY_SNAPSHOT 00:16:15.505 21:11:53 -- target/nvmf_lvol.sh@47 -- # snapshot=2b2bd707-f438-41a3-b29f-bab8f4424932 00:16:15.505 21:11:53 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 64a93729-3fd9-40b4-8498-7adf94ff9666 30 00:16:15.772 21:11:53 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2b2bd707-f438-41a3-b29f-bab8f4424932 MY_CLONE 00:16:16.034 21:11:53 -- target/nvmf_lvol.sh@49 -- # clone=36eadda0-7c35-40b6-bd3d-1971559607b2 00:16:16.034 21:11:53 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 36eadda0-7c35-40b6-bd3d-1971559607b2 00:16:16.295 21:11:54 -- target/nvmf_lvol.sh@53 -- # wait 2331304 00:16:26.297 Initializing NVMe Controllers 00:16:26.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:26.297 Controller IO queue size 128, less than required. 00:16:26.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:26.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:26.297 Initialization complete. Launching workers. 
00:16:26.297 ======================================================== 00:16:26.297 Latency(us) 00:16:26.297 Device Information : IOPS MiB/s Average min max 00:16:26.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12360.60 48.28 10358.41 1426.60 42555.17 00:16:26.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17887.30 69.87 7157.87 1109.19 53826.62 00:16:26.297 ======================================================== 00:16:26.297 Total : 30247.90 118.16 8465.75 1109.19 53826.62 00:16:26.297 00:16:26.297 21:12:02 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:26.297 21:12:02 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 64a93729-3fd9-40b4-8498-7adf94ff9666 00:16:26.297 21:12:02 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc8f22ea-1baa-4bb0-beaa-41c3fa3d595a 00:16:26.297 21:12:03 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:26.297 21:12:03 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:26.297 21:12:03 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:26.297 21:12:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.297 21:12:03 -- nvmf/common.sh@116 -- # sync 00:16:26.297 21:12:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.297 21:12:03 -- nvmf/common.sh@119 -- # set +e 00:16:26.297 21:12:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.297 21:12:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.297 rmmod nvme_tcp 00:16:26.297 rmmod nvme_fabrics 00:16:26.297 rmmod nvme_keyring 00:16:26.297 21:12:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:26.297 21:12:03 -- nvmf/common.sh@123 -- # set -e 00:16:26.297 21:12:03 -- nvmf/common.sh@124 -- # return 0 00:16:26.297 21:12:03 -- nvmf/common.sh@477 -- # '[' -n 2330863 ']' 00:16:26.297 21:12:03 -- nvmf/common.sh@478 -- # killprocess 2330863 00:16:26.297 21:12:03 -- common/autotest_common.sh@926 -- # '[' -z 2330863 ']' 00:16:26.297 21:12:03 -- common/autotest_common.sh@930 -- # kill -0 2330863 00:16:26.297 21:12:03 -- common/autotest_common.sh@931 -- # uname 00:16:26.297 21:12:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.297 21:12:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2330863 00:16:26.297 21:12:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:26.297 21:12:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:26.297 21:12:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2330863' 00:16:26.297 killing process with pid 2330863 00:16:26.297 21:12:03 -- common/autotest_common.sh@945 -- # kill 2330863 00:16:26.297 21:12:03 -- common/autotest_common.sh@950 -- # wait 2330863 00:16:26.297 21:12:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:26.297 21:12:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:26.297 21:12:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:26.297 21:12:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.297 21:12:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:26.297 21:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.297 21:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.297 21:12:03 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:27.681 21:12:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:27.681 00:16:27.681 real 0m22.932s 00:16:27.681 user 1m3.151s 00:16:27.681 sys 0m7.591s 00:16:27.681 21:12:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.681 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:16:27.681 ************************************ 00:16:27.681 END TEST nvmf_lvol 00:16:27.681 ************************************ 00:16:27.682 21:12:05 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:27.682 21:12:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:27.682 21:12:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.682 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:16:27.682 ************************************ 00:16:27.682 START TEST nvmf_lvs_grow 00:16:27.682 ************************************ 00:16:27.682 21:12:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:27.682 * Looking for test storage... 00:16:27.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.682 21:12:05 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.682 21:12:05 -- nvmf/common.sh@7 -- # uname -s 00:16:27.682 21:12:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.682 21:12:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.682 21:12:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.682 21:12:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.682 21:12:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.682 21:12:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.682 21:12:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.682 21:12:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.682 21:12:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.682 21:12:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.682 21:12:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.682 21:12:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.682 21:12:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.682 21:12:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.682 21:12:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.682 21:12:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.682 21:12:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.682 21:12:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.682 21:12:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.682 21:12:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.682 21:12:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.682 21:12:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.682 21:12:05 -- paths/export.sh@5 -- # export PATH 00:16:27.682 21:12:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.682 21:12:05 -- nvmf/common.sh@46 -- # : 0 00:16:27.682 21:12:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:27.682 21:12:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:27.682 21:12:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:27.682 21:12:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.682 21:12:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.682 21:12:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:27.682 21:12:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:27.682 21:12:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:27.682 21:12:05 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:27.682 21:12:05 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.682 21:12:05 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:27.682 21:12:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:27.682 21:12:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.682 21:12:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:27.682 21:12:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:27.682 21:12:05 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:16:27.682 21:12:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.682 21:12:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.682 21:12:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.682 21:12:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:27.682 21:12:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:27.682 21:12:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:27.682 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:16:35.832 21:12:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:35.832 21:12:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:35.832 21:12:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:35.832 21:12:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:35.832 21:12:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:35.832 21:12:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:35.832 21:12:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:35.832 21:12:12 -- nvmf/common.sh@294 -- # net_devs=() 00:16:35.832 21:12:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:35.832 21:12:12 -- nvmf/common.sh@295 -- # e810=() 00:16:35.832 21:12:12 -- nvmf/common.sh@295 -- # local -ga e810 00:16:35.832 21:12:12 -- nvmf/common.sh@296 -- # x722=() 00:16:35.832 21:12:12 -- nvmf/common.sh@296 -- # local -ga x722 00:16:35.832 21:12:12 -- nvmf/common.sh@297 -- # mlx=() 00:16:35.832 21:12:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:35.832 21:12:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.832 21:12:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:35.832 21:12:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:35.832 21:12:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:35.832 21:12:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:35.832 21:12:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:35.832 21:12:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:35.833 21:12:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:35.833 21:12:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:35.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:35.833 21:12:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:35.833 
21:12:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:35.833 21:12:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:35.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:35.833 21:12:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:35.833 21:12:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:35.833 21:12:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.833 21:12:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:35.833 21:12:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.833 21:12:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:35.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:35.833 21:12:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.833 21:12:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:35.833 21:12:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.833 21:12:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:35.833 21:12:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.833 21:12:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:35.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:35.833 21:12:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.833 21:12:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:35.833 21:12:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:35.833 21:12:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:35.833 21:12:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.833 21:12:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.833 21:12:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.833 21:12:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:35.833 21:12:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.833 21:12:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.833 21:12:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:35.833 21:12:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.833 21:12:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.833 21:12:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:35.833 21:12:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:35.833 21:12:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.833 21:12:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.833 21:12:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.833 21:12:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.833 21:12:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:35.833 
21:12:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.833 21:12:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.833 21:12:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.833 21:12:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:35.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:16:35.833 00:16:35.833 --- 10.0.0.2 ping statistics --- 00:16:35.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.833 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:16:35.833 21:12:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:16:35.833 00:16:35.833 --- 10.0.0.1 ping statistics --- 00:16:35.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.833 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:16:35.833 21:12:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.833 21:12:12 -- nvmf/common.sh@410 -- # return 0 00:16:35.833 21:12:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.833 21:12:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.833 21:12:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.833 21:12:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.833 21:12:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.833 21:12:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.833 21:12:12 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:35.833 21:12:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.833 21:12:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:35.833 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 21:12:12 -- nvmf/common.sh@469 -- # nvmfpid=2338249 00:16:35.833 21:12:12 -- nvmf/common.sh@470 -- # waitforlisten 2338249 00:16:35.833 21:12:12 -- common/autotest_common.sh@819 -- # '[' -z 2338249 ']' 00:16:35.833 21:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.833 21:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:35.833 21:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.833 21:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:35.833 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 21:12:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:35.833 [2024-06-08 21:12:12.845377] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:35.833 [2024-06-08 21:12:12.845436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.833 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.833 [2024-06-08 21:12:12.911976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.833 [2024-06-08 21:12:12.980218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.833 [2024-06-08 21:12:12.980339] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.833 [2024-06-08 21:12:12.980348] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.833 [2024-06-08 21:12:12.980354] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.833 [2024-06-08 21:12:12.980373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.833 21:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.833 21:12:13 -- common/autotest_common.sh@852 -- # return 0 00:16:35.833 21:12:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.833 21:12:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.833 21:12:13 -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 21:12:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:35.833 [2024-06-08 21:12:13.767134] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:35.833 21:12:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:35.833 21:12:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:35.833 21:12:13 -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 ************************************ 00:16:35.833 START TEST lvs_grow_clean 00:16:35.833 ************************************ 00:16:35.833 21:12:13 -- common/autotest_common.sh@1104 -- # lvs_grow 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:35.833 21:12:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:35.834 21:12:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:35.834 21:12:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:35.834 21:12:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:35.834 21:12:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:36.095 21:12:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:36.095 21:12:13 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:36.095 21:12:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:36.095 21:12:14 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:36.095 21:12:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:36.356 21:12:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:36.356 21:12:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:36.356 21:12:14 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 lvol 150 00:16:36.356 21:12:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6840ecf-39fa-4154-9e7e-75486123cdee 00:16:36.356 21:12:14 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.356 21:12:14 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:36.617 [2024-06-08 21:12:14.569945] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:36.617 [2024-06-08 21:12:14.569996] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:36.617 true 00:16:36.617 21:12:14 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:36.617 21:12:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:36.877 21:12:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:36.877 21:12:14 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:36.877 21:12:14 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6840ecf-39fa-4154-9e7e-75486123cdee 00:16:37.138 21:12:14 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:37.138 [2024-06-08 21:12:15.135742] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.138 21:12:15 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.399 21:12:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2338645 00:16:37.399 21:12:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.399 21:12:15 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:37.399 21:12:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2338645 /var/tmp/bdevperf.sock 00:16:37.399 21:12:15 -- common/autotest_common.sh@819 -- # '[' -z 2338645 ']' 00:16:37.399 
21:12:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.399 21:12:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.399 21:12:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.399 21:12:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.399 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:16:37.399 [2024-06-08 21:12:15.328732] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:37.399 [2024-06-08 21:12:15.328781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338645 ] 00:16:37.399 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.399 [2024-06-08 21:12:15.403361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.399 [2024-06-08 21:12:15.465496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.341 21:12:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:38.341 21:12:16 -- common/autotest_common.sh@852 -- # return 0 00:16:38.341 21:12:16 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:38.602 Nvme0n1 00:16:38.602 21:12:16 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:38.602 [ 00:16:38.602 { 00:16:38.602 "name": "Nvme0n1", 00:16:38.602 "aliases": [ 00:16:38.602 "c6840ecf-39fa-4154-9e7e-75486123cdee" 00:16:38.602 ], 00:16:38.602 "product_name": "NVMe disk", 00:16:38.602 "block_size": 4096, 00:16:38.602 "num_blocks": 38912, 00:16:38.602 "uuid": "c6840ecf-39fa-4154-9e7e-75486123cdee", 00:16:38.602 "assigned_rate_limits": { 00:16:38.602 "rw_ios_per_sec": 0, 00:16:38.602 "rw_mbytes_per_sec": 0, 00:16:38.602 "r_mbytes_per_sec": 0, 00:16:38.602 "w_mbytes_per_sec": 0 00:16:38.602 }, 00:16:38.602 "claimed": false, 00:16:38.602 "zoned": false, 00:16:38.602 "supported_io_types": { 00:16:38.602 "read": true, 00:16:38.602 "write": true, 00:16:38.602 "unmap": true, 00:16:38.602 "write_zeroes": true, 00:16:38.602 "flush": true, 00:16:38.602 "reset": true, 00:16:38.602 "compare": true, 00:16:38.602 "compare_and_write": true, 00:16:38.602 "abort": true, 00:16:38.602 "nvme_admin": true, 00:16:38.602 "nvme_io": true 00:16:38.602 }, 00:16:38.602 "driver_specific": { 00:16:38.602 "nvme": [ 00:16:38.602 { 00:16:38.602 "trid": { 00:16:38.602 "trtype": "TCP", 00:16:38.602 "adrfam": "IPv4", 00:16:38.602 "traddr": "10.0.0.2", 00:16:38.602 "trsvcid": "4420", 00:16:38.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:38.602 }, 00:16:38.602 "ctrlr_data": { 00:16:38.602 "cntlid": 1, 00:16:38.602 "vendor_id": "0x8086", 00:16:38.602 "model_number": "SPDK bdev Controller", 00:16:38.602 "serial_number": "SPDK0", 00:16:38.602 "firmware_revision": "24.01.1", 00:16:38.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:38.603 "oacs": { 00:16:38.603 "security": 0, 00:16:38.603 "format": 0, 00:16:38.603 "firmware": 0, 00:16:38.603 "ns_manage": 0 00:16:38.603 }, 00:16:38.603 "multi_ctrlr": 
true, 00:16:38.603 "ana_reporting": false 00:16:38.603 }, 00:16:38.603 "vs": { 00:16:38.603 "nvme_version": "1.3" 00:16:38.603 }, 00:16:38.603 "ns_data": { 00:16:38.603 "id": 1, 00:16:38.603 "can_share": true 00:16:38.603 } 00:16:38.603 } 00:16:38.603 ], 00:16:38.603 "mp_policy": "active_passive" 00:16:38.603 } 00:16:38.603 } 00:16:38.603 ] 00:16:38.603 21:12:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2338982 00:16:38.603 21:12:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:38.603 21:12:16 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:38.864 Running I/O for 10 seconds... 00:16:39.805 Latency(us) 00:16:39.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.805 Nvme0n1 : 1.00 17676.00 69.05 0.00 0.00 0.00 0.00 0.00 00:16:39.805 =================================================================================================================== 00:16:39.805 Total : 17676.00 69.05 0.00 0.00 0.00 0.00 0.00 00:16:39.805 00:16:40.746 21:12:18 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:40.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.746 Nvme0n1 : 2.00 17826.00 69.63 0.00 0.00 0.00 0.00 0.00 00:16:40.746 =================================================================================================================== 00:16:40.746 Total : 17826.00 69.63 0.00 0.00 0.00 0.00 0.00 00:16:40.746 00:16:40.746 true 00:16:40.746 21:12:18 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:40.746 21:12:18 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:41.007 21:12:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:41.007 21:12:18 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:41.007 21:12:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 2338982 00:16:41.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.950 Nvme0n1 : 3.00 17873.33 69.82 0.00 0.00 0.00 0.00 0.00 00:16:41.950 =================================================================================================================== 00:16:41.950 Total : 17873.33 69.82 0.00 0.00 0.00 0.00 0.00 00:16:41.950 00:16:42.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.919 Nvme0n1 : 4.00 17911.00 69.96 0.00 0.00 0.00 0.00 0.00 00:16:42.919 =================================================================================================================== 00:16:42.919 Total : 17911.00 69.96 0.00 0.00 0.00 0.00 0.00 00:16:42.919 00:16:43.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.859 Nvme0n1 : 5.00 17944.80 70.10 0.00 0.00 0.00 0.00 0.00 00:16:43.859 =================================================================================================================== 00:16:43.859 Total : 17944.80 70.10 0.00 0.00 0.00 0.00 0.00 00:16:43.859 00:16:44.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.802 Nvme0n1 : 6.00 17972.67 70.21 0.00 0.00 0.00 0.00 0.00 00:16:44.802 
=================================================================================================================== 00:16:44.802 Total : 17972.67 70.21 0.00 0.00 0.00 0.00 0.00 00:16:44.802 00:16:45.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.744 Nvme0n1 : 7.00 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:16:45.744 =================================================================================================================== 00:16:45.744 Total : 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:16:45.744 00:16:46.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.687 Nvme0n1 : 8.00 18013.50 70.37 0.00 0.00 0.00 0.00 0.00 00:16:46.687 =================================================================================================================== 00:16:46.687 Total : 18013.50 70.37 0.00 0.00 0.00 0.00 0.00 00:16:46.687 00:16:48.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.072 Nvme0n1 : 9.00 18028.00 70.42 0.00 0.00 0.00 0.00 0.00 00:16:48.072 =================================================================================================================== 00:16:48.072 Total : 18028.00 70.42 0.00 0.00 0.00 0.00 0.00 00:16:48.072 00:16:48.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.644 Nvme0n1 : 10.00 18039.60 70.47 0.00 0.00 0.00 0.00 0.00 00:16:48.644 =================================================================================================================== 00:16:48.644 Total : 18039.60 70.47 0.00 0.00 0.00 0.00 0.00 00:16:48.644 00:16:48.644 00:16:48.644 Latency(us) 00:16:48.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.644 Nvme0n1 : 10.01 18039.43 70.47 0.00 0.00 7090.80 5215.57 17803.95 00:16:48.644 =================================================================================================================== 00:16:48.644 Total : 18039.43 70.47 0.00 0.00 7090.80 5215.57 17803.95 00:16:48.644 0 00:16:48.904 21:12:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2338645 00:16:48.904 21:12:26 -- common/autotest_common.sh@926 -- # '[' -z 2338645 ']' 00:16:48.904 21:12:26 -- common/autotest_common.sh@930 -- # kill -0 2338645 00:16:48.904 21:12:26 -- common/autotest_common.sh@931 -- # uname 00:16:48.904 21:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.904 21:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2338645 00:16:48.904 21:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:48.904 21:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:48.904 21:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2338645' 00:16:48.904 killing process with pid 2338645 00:16:48.904 21:12:26 -- common/autotest_common.sh@945 -- # kill 2338645 00:16:48.904 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.904 00:16:48.904 Latency(us) 00:16:48.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.904 =================================================================================================================== 00:16:48.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.904 21:12:26 -- common/autotest_common.sh@950 -- # wait 2338645 00:16:48.905 21:12:26 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:49.165 21:12:27 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:49.165 21:12:27 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:49.165 21:12:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:49.165 21:12:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:49.165 21:12:27 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:49.425 [2024-06-08 21:12:27.368648] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:49.425 21:12:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:49.425 21:12:27 -- common/autotest_common.sh@640 -- # local es=0 00:16:49.425 21:12:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:49.425 21:12:27 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.425 21:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.425 21:12:27 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.425 21:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.425 21:12:27 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.425 21:12:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.425 21:12:27 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.425 21:12:27 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:49.426 21:12:27 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:49.686 request: 00:16:49.686 { 00:16:49.686 "uuid": "42d3273e-63e8-4c5e-a43e-764ca6adf326", 00:16:49.686 "method": "bdev_lvol_get_lvstores", 00:16:49.686 "req_id": 1 00:16:49.686 } 00:16:49.686 Got JSON-RPC error response 00:16:49.686 response: 00:16:49.686 { 00:16:49.686 "code": -19, 00:16:49.686 "message": "No such device" 00:16:49.686 } 00:16:49.686 21:12:27 -- common/autotest_common.sh@643 -- # es=1 00:16:49.686 21:12:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:49.686 21:12:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:49.686 21:12:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:49.686 21:12:27 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:49.686 aio_bdev 00:16:49.686 21:12:27 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c6840ecf-39fa-4154-9e7e-75486123cdee 00:16:49.686 21:12:27 -- common/autotest_common.sh@887 -- # local bdev_name=c6840ecf-39fa-4154-9e7e-75486123cdee 00:16:49.686 21:12:27 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:49.686 21:12:27 -- common/autotest_common.sh@889 -- # local i 00:16:49.686 21:12:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:49.686 21:12:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:49.686 21:12:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:49.947 21:12:27 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6840ecf-39fa-4154-9e7e-75486123cdee -t 2000 00:16:49.947 [ 00:16:49.947 { 00:16:49.947 "name": "c6840ecf-39fa-4154-9e7e-75486123cdee", 00:16:49.947 "aliases": [ 00:16:49.947 "lvs/lvol" 00:16:49.947 ], 00:16:49.947 "product_name": "Logical Volume", 00:16:49.947 "block_size": 4096, 00:16:49.947 "num_blocks": 38912, 00:16:49.947 "uuid": "c6840ecf-39fa-4154-9e7e-75486123cdee", 00:16:49.947 "assigned_rate_limits": { 00:16:49.947 "rw_ios_per_sec": 0, 00:16:49.947 "rw_mbytes_per_sec": 0, 00:16:49.947 "r_mbytes_per_sec": 0, 00:16:49.947 "w_mbytes_per_sec": 0 00:16:49.947 }, 00:16:49.947 "claimed": false, 00:16:49.947 "zoned": false, 00:16:49.947 "supported_io_types": { 00:16:49.947 "read": true, 00:16:49.947 "write": true, 00:16:49.947 "unmap": true, 00:16:49.947 "write_zeroes": true, 00:16:49.947 "flush": false, 00:16:49.947 "reset": true, 00:16:49.947 "compare": false, 00:16:49.947 "compare_and_write": false, 00:16:49.947 "abort": false, 00:16:49.947 "nvme_admin": false, 00:16:49.947 "nvme_io": false 00:16:49.947 }, 00:16:49.947 "driver_specific": { 00:16:49.947 "lvol": { 00:16:49.947 "lvol_store_uuid": "42d3273e-63e8-4c5e-a43e-764ca6adf326", 00:16:49.947 "base_bdev": "aio_bdev", 00:16:49.947 "thin_provision": false, 00:16:49.947 "snapshot": false, 00:16:49.947 "clone": false, 00:16:49.947 "esnap_clone": false 00:16:49.947 } 00:16:49.947 } 00:16:49.947 } 00:16:49.947 ] 00:16:49.947 21:12:28 -- common/autotest_common.sh@895 -- # return 0 00:16:49.947 21:12:28 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:49.947 21:12:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:50.208 21:12:28 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:50.208 21:12:28 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:50.208 21:12:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:50.468 21:12:28 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:50.468 21:12:28 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6840ecf-39fa-4154-9e7e-75486123cdee 00:16:50.468 21:12:28 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42d3273e-63e8-4c5e-a43e-764ca6adf326 00:16:50.729 21:12:28 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:50.729 21:12:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.729 00:16:50.729 real 0m14.995s 00:16:50.729 user 0m14.724s 00:16:50.729 sys 0m1.252s 00:16:50.729 21:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:50.729 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.729 ************************************ 00:16:50.729 END TEST lvs_grow_clean 00:16:50.729 ************************************ 00:16:50.729 21:12:28 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:50.990 21:12:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:50.990 21:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:50.990 21:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.990 ************************************ 00:16:50.990 START TEST lvs_grow_dirty 00:16:50.990 ************************************ 00:16:50.990 21:12:28 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.990 21:12:28 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.990 21:12:29 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:50.990 21:12:29 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:51.250 21:12:29 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:16:51.251 21:12:29 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:16:51.251 21:12:29 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:51.251 21:12:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:51.251 21:12:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:51.251 21:12:29 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 lvol 150 00:16:51.511 21:12:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:16:51.512 21:12:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:51.512 21:12:29 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:51.772 [2024-06-08 21:12:29.604852] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:51.772 [2024-06-08 21:12:29.604903] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:51.772 
true 00:16:51.772 21:12:29 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:16:51.772 21:12:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:51.772 21:12:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:51.772 21:12:29 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:52.032 21:12:29 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:16:52.032 21:12:30 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:52.293 21:12:30 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.293 21:12:30 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:52.293 21:12:30 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2341745 00:16:52.293 21:12:30 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.293 21:12:30 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2341745 /var/tmp/bdevperf.sock 00:16:52.293 21:12:30 -- common/autotest_common.sh@819 -- # '[' -z 2341745 ']' 00:16:52.293 21:12:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.293 21:12:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.293 21:12:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.293 21:12:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.293 21:12:30 -- common/autotest_common.sh@10 -- # set +x 00:16:52.554 [2024-06-08 21:12:30.399181] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:52.554 [2024-06-08 21:12:30.399233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341745 ] 00:16:52.554 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.554 [2024-06-08 21:12:30.473807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.554 [2024-06-08 21:12:30.526610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.126 21:12:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.126 21:12:31 -- common/autotest_common.sh@852 -- # return 0 00:16:53.126 21:12:31 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:53.698 Nvme0n1 00:16:53.698 21:12:31 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:53.698 [ 00:16:53.698 { 00:16:53.698 "name": "Nvme0n1", 00:16:53.698 "aliases": [ 00:16:53.698 "7eee8de8-5410-4b59-ac55-ffc22b8e6141" 00:16:53.698 ], 00:16:53.698 "product_name": "NVMe disk", 00:16:53.698 "block_size": 4096, 00:16:53.698 "num_blocks": 38912, 00:16:53.698 "uuid": "7eee8de8-5410-4b59-ac55-ffc22b8e6141", 00:16:53.698 "assigned_rate_limits": { 00:16:53.698 "rw_ios_per_sec": 0, 00:16:53.698 "rw_mbytes_per_sec": 0, 00:16:53.698 "r_mbytes_per_sec": 0, 00:16:53.698 "w_mbytes_per_sec": 0 00:16:53.698 }, 00:16:53.698 "claimed": false, 00:16:53.698 "zoned": false, 00:16:53.698 "supported_io_types": { 00:16:53.698 "read": true, 00:16:53.698 "write": true, 00:16:53.698 "unmap": true, 00:16:53.698 "write_zeroes": true, 00:16:53.698 "flush": true, 00:16:53.698 "reset": true, 00:16:53.698 "compare": true, 00:16:53.698 "compare_and_write": true, 00:16:53.698 "abort": true, 00:16:53.698 "nvme_admin": true, 00:16:53.698 "nvme_io": true 00:16:53.698 }, 00:16:53.698 "driver_specific": { 00:16:53.698 "nvme": [ 00:16:53.698 { 00:16:53.698 "trid": { 00:16:53.698 "trtype": "TCP", 00:16:53.698 "adrfam": "IPv4", 00:16:53.698 "traddr": "10.0.0.2", 00:16:53.698 "trsvcid": "4420", 00:16:53.698 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.698 }, 00:16:53.698 "ctrlr_data": { 00:16:53.698 "cntlid": 1, 00:16:53.698 "vendor_id": "0x8086", 00:16:53.698 "model_number": "SPDK bdev Controller", 00:16:53.698 "serial_number": "SPDK0", 00:16:53.698 "firmware_revision": "24.01.1", 00:16:53.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.698 "oacs": { 00:16:53.698 "security": 0, 00:16:53.698 "format": 0, 00:16:53.698 "firmware": 0, 00:16:53.698 "ns_manage": 0 00:16:53.698 }, 00:16:53.698 "multi_ctrlr": true, 00:16:53.698 "ana_reporting": false 00:16:53.698 }, 00:16:53.698 "vs": { 00:16:53.698 "nvme_version": "1.3" 00:16:53.698 }, 00:16:53.698 "ns_data": { 00:16:53.698 "id": 1, 00:16:53.698 "can_share": true 00:16:53.698 } 00:16:53.698 } 00:16:53.698 ], 00:16:53.698 "mp_policy": "active_passive" 00:16:53.698 } 00:16:53.698 } 00:16:53.698 ] 00:16:53.698 21:12:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2342093 00:16:53.698 21:12:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:53.698 21:12:31 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.698 Running I/O 
for 10 seconds... 00:16:55.082 Latency(us) 00:16:55.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.082 Nvme0n1 : 1.00 18154.00 70.91 0.00 0.00 0.00 0.00 0.00 00:16:55.082 =================================================================================================================== 00:16:55.082 Total : 18154.00 70.91 0.00 0.00 0.00 0.00 0.00 00:16:55.082 00:16:55.654 21:12:33 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:16:55.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.914 Nvme0n1 : 2.00 18318.50 71.56 0.00 0.00 0.00 0.00 0.00 00:16:55.914 =================================================================================================================== 00:16:55.914 Total : 18318.50 71.56 0.00 0.00 0.00 0.00 0.00 00:16:55.914 00:16:55.914 true 00:16:55.914 21:12:33 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:16:55.914 21:12:33 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:56.174 21:12:34 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:56.174 21:12:34 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:56.174 21:12:34 -- target/nvmf_lvs_grow.sh@65 -- # wait 2342093 00:16:56.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.746 Nvme0n1 : 3.00 18387.67 71.83 0.00 0.00 0.00 0.00 0.00 00:16:56.746 =================================================================================================================== 00:16:56.746 Total : 18387.67 71.83 0.00 0.00 0.00 0.00 0.00 00:16:56.746 00:16:58.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.133 Nvme0n1 : 4.00 18439.25 72.03 0.00 0.00 0.00 0.00 0.00 00:16:58.133 =================================================================================================================== 00:16:58.133 Total : 18439.25 72.03 0.00 0.00 0.00 0.00 0.00 00:16:58.133 00:16:58.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.757 Nvme0n1 : 5.00 18463.40 72.12 0.00 0.00 0.00 0.00 0.00 00:16:58.757 =================================================================================================================== 00:16:58.757 Total : 18463.40 72.12 0.00 0.00 0.00 0.00 0.00 00:16:58.757 00:16:59.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.698 Nvme0n1 : 6.00 18492.33 72.24 0.00 0.00 0.00 0.00 0.00 00:16:59.698 =================================================================================================================== 00:16:59.698 Total : 18492.33 72.24 0.00 0.00 0.00 0.00 0.00 00:16:59.698 00:17:01.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.084 Nvme0n1 : 7.00 18509.29 72.30 0.00 0.00 0.00 0.00 0.00 00:17:01.084 =================================================================================================================== 00:17:01.084 Total : 18509.29 72.30 0.00 0.00 0.00 0.00 0.00 00:17:01.084 00:17:02.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.027 Nvme0n1 : 8.00 18531.50 72.39 0.00 0.00 0.00 0.00 0.00 00:17:02.028 
=================================================================================================================== 00:17:02.028 Total : 18531.50 72.39 0.00 0.00 0.00 0.00 0.00 00:17:02.028 00:17:02.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.970 Nvme0n1 : 9.00 18543.33 72.43 0.00 0.00 0.00 0.00 0.00 00:17:02.970 =================================================================================================================== 00:17:02.970 Total : 18543.33 72.43 0.00 0.00 0.00 0.00 0.00 00:17:02.970 00:17:03.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.914 Nvme0n1 : 10.00 18553.40 72.47 0.00 0.00 0.00 0.00 0.00 00:17:03.914 =================================================================================================================== 00:17:03.914 Total : 18553.40 72.47 0.00 0.00 0.00 0.00 0.00 00:17:03.914 00:17:03.914 00:17:03.914 Latency(us) 00:17:03.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.914 Nvme0n1 : 10.00 18557.14 72.49 0.00 0.00 6893.85 4669.44 21080.75 00:17:03.914 =================================================================================================================== 00:17:03.914 Total : 18557.14 72.49 0.00 0.00 6893.85 4669.44 21080.75 00:17:03.914 0 00:17:03.914 21:12:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2341745 00:17:03.914 21:12:41 -- common/autotest_common.sh@926 -- # '[' -z 2341745 ']' 00:17:03.914 21:12:41 -- common/autotest_common.sh@930 -- # kill -0 2341745 00:17:03.914 21:12:41 -- common/autotest_common.sh@931 -- # uname 00:17:03.914 21:12:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.914 21:12:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2341745 00:17:03.914 21:12:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:03.914 21:12:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:03.914 21:12:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2341745' 00:17:03.914 killing process with pid 2341745 00:17:03.914 21:12:41 -- common/autotest_common.sh@945 -- # kill 2341745 00:17:03.914 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.914 00:17:03.914 Latency(us) 00:17:03.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.914 =================================================================================================================== 00:17:03.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.914 21:12:41 -- common/autotest_common.sh@950 -- # wait 2341745 00:17:03.914 21:12:41 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:04.175 21:12:42 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:04.175 21:12:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:04.436 21:12:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:04.436 21:12:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:04.436 21:12:42 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2338249 00:17:04.436 21:12:42 -- target/nvmf_lvs_grow.sh@74 -- # wait 2338249 00:17:04.436 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2338249 Killed "${NVMF_APP[@]}" "$@" 00:17:04.436 21:12:42 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:04.436 21:12:42 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:04.436 21:12:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.436 21:12:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:04.436 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.436 21:12:42 -- nvmf/common.sh@469 -- # nvmfpid=2344132 00:17:04.436 21:12:42 -- nvmf/common.sh@470 -- # waitforlisten 2344132 00:17:04.436 21:12:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:04.436 21:12:42 -- common/autotest_common.sh@819 -- # '[' -z 2344132 ']' 00:17:04.436 21:12:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.436 21:12:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.436 21:12:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.436 21:12:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.436 21:12:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.436 [2024-06-08 21:12:42.431781] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:04.436 [2024-06-08 21:12:42.431832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.436 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.436 [2024-06-08 21:12:42.496707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.697 [2024-06-08 21:12:42.559941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.697 [2024-06-08 21:12:42.560055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.697 [2024-06-08 21:12:42.560063] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.697 [2024-06-08 21:12:42.560070] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
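The "dirty" lvol-store recovery traced below works because the lvstore metadata lives on the aio backing file, which survives the kill -9 of the previous nvmf target: once the freshly started target re-creates an aio bdev over the same file, the blobstore on it is recovered, its blobs are replayed, and the logical volume reappears under its original UUID. A minimal manual sketch of that RPC sequence, assuming the default /var/tmp/spdk.sock RPC socket, with paths abbreviated to the spdk checkout root, and using the UUIDs from this particular run (they would differ elsewhere):
  # re-create the aio bdev over the surviving backing file (4096-byte block size)
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # the blobstore on it is recovered automatically; confirm the lvstore and its cluster counters
  scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5
  # wait (up to 2000 ms) for the recovered lvol bdev to become visible again
  scripts/rpc.py bdev_get_bdevs -b 7eee8de8-5410-4b59-ac55-ffc22b8e6141 -t 2000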
00:17:04.697 [2024-06-08 21:12:42.560090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.269 21:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:05.269 21:12:43 -- common/autotest_common.sh@852 -- # return 0 00:17:05.269 21:12:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.269 21:12:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:05.269 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:05.269 21:12:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.269 21:12:43 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:05.530 [2024-06-08 21:12:43.364901] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:05.530 [2024-06-08 21:12:43.364989] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:05.530 [2024-06-08 21:12:43.365018] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:05.530 21:12:43 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:05.530 21:12:43 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:17:05.530 21:12:43 -- common/autotest_common.sh@887 -- # local bdev_name=7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:17:05.530 21:12:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:05.530 21:12:43 -- common/autotest_common.sh@889 -- # local i 00:17:05.530 21:12:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:05.530 21:12:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:05.530 21:12:43 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:05.530 21:12:43 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7eee8de8-5410-4b59-ac55-ffc22b8e6141 -t 2000 00:17:05.791 [ 00:17:05.791 { 00:17:05.791 "name": "7eee8de8-5410-4b59-ac55-ffc22b8e6141", 00:17:05.791 "aliases": [ 00:17:05.791 "lvs/lvol" 00:17:05.791 ], 00:17:05.791 "product_name": "Logical Volume", 00:17:05.791 "block_size": 4096, 00:17:05.791 "num_blocks": 38912, 00:17:05.791 "uuid": "7eee8de8-5410-4b59-ac55-ffc22b8e6141", 00:17:05.791 "assigned_rate_limits": { 00:17:05.791 "rw_ios_per_sec": 0, 00:17:05.791 "rw_mbytes_per_sec": 0, 00:17:05.791 "r_mbytes_per_sec": 0, 00:17:05.791 "w_mbytes_per_sec": 0 00:17:05.791 }, 00:17:05.791 "claimed": false, 00:17:05.791 "zoned": false, 00:17:05.791 "supported_io_types": { 00:17:05.791 "read": true, 00:17:05.791 "write": true, 00:17:05.791 "unmap": true, 00:17:05.791 "write_zeroes": true, 00:17:05.791 "flush": false, 00:17:05.791 "reset": true, 00:17:05.791 "compare": false, 00:17:05.791 "compare_and_write": false, 00:17:05.791 "abort": false, 00:17:05.791 "nvme_admin": false, 00:17:05.791 "nvme_io": false 00:17:05.791 }, 00:17:05.791 "driver_specific": { 00:17:05.791 "lvol": { 00:17:05.791 "lvol_store_uuid": "a172e89c-99ca-4a4a-b4c6-59db84d2e0f5", 00:17:05.791 "base_bdev": "aio_bdev", 00:17:05.791 "thin_provision": false, 00:17:05.791 "snapshot": false, 00:17:05.791 "clone": false, 00:17:05.791 "esnap_clone": false 00:17:05.791 } 00:17:05.791 } 00:17:05.791 } 00:17:05.791 ] 00:17:05.791 21:12:43 -- common/autotest_common.sh@895 -- # return 0 00:17:05.791 21:12:43 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:05.791 21:12:43 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:05.791 21:12:43 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:05.791 21:12:43 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:05.791 21:12:43 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:06.052 21:12:43 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:06.052 21:12:43 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:06.052 [2024-06-08 21:12:44.120837] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:06.313 21:12:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:06.313 21:12:44 -- common/autotest_common.sh@640 -- # local es=0 00:17:06.313 21:12:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:06.313 21:12:44 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.313 21:12:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.313 21:12:44 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.313 21:12:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.313 21:12:44 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.313 21:12:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.313 21:12:44 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.313 21:12:44 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:06.313 21:12:44 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:06.313 request: 00:17:06.313 { 00:17:06.313 "uuid": "a172e89c-99ca-4a4a-b4c6-59db84d2e0f5", 00:17:06.313 "method": "bdev_lvol_get_lvstores", 00:17:06.313 "req_id": 1 00:17:06.313 } 00:17:06.313 Got JSON-RPC error response 00:17:06.313 response: 00:17:06.313 { 00:17:06.313 "code": -19, 00:17:06.313 "message": "No such device" 00:17:06.313 } 00:17:06.313 21:12:44 -- common/autotest_common.sh@643 -- # es=1 00:17:06.313 21:12:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.313 21:12:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.313 21:12:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.313 21:12:44 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.574 aio_bdev 00:17:06.574 21:12:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:17:06.574 21:12:44 -- 
common/autotest_common.sh@887 -- # local bdev_name=7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:17:06.574 21:12:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:06.574 21:12:44 -- common/autotest_common.sh@889 -- # local i 00:17:06.574 21:12:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:06.574 21:12:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:06.574 21:12:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:06.574 21:12:44 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7eee8de8-5410-4b59-ac55-ffc22b8e6141 -t 2000 00:17:06.835 [ 00:17:06.835 { 00:17:06.835 "name": "7eee8de8-5410-4b59-ac55-ffc22b8e6141", 00:17:06.835 "aliases": [ 00:17:06.835 "lvs/lvol" 00:17:06.835 ], 00:17:06.835 "product_name": "Logical Volume", 00:17:06.835 "block_size": 4096, 00:17:06.835 "num_blocks": 38912, 00:17:06.835 "uuid": "7eee8de8-5410-4b59-ac55-ffc22b8e6141", 00:17:06.835 "assigned_rate_limits": { 00:17:06.835 "rw_ios_per_sec": 0, 00:17:06.835 "rw_mbytes_per_sec": 0, 00:17:06.835 "r_mbytes_per_sec": 0, 00:17:06.835 "w_mbytes_per_sec": 0 00:17:06.835 }, 00:17:06.835 "claimed": false, 00:17:06.835 "zoned": false, 00:17:06.835 "supported_io_types": { 00:17:06.835 "read": true, 00:17:06.835 "write": true, 00:17:06.835 "unmap": true, 00:17:06.835 "write_zeroes": true, 00:17:06.835 "flush": false, 00:17:06.835 "reset": true, 00:17:06.835 "compare": false, 00:17:06.836 "compare_and_write": false, 00:17:06.836 "abort": false, 00:17:06.836 "nvme_admin": false, 00:17:06.836 "nvme_io": false 00:17:06.836 }, 00:17:06.836 "driver_specific": { 00:17:06.836 "lvol": { 00:17:06.836 "lvol_store_uuid": "a172e89c-99ca-4a4a-b4c6-59db84d2e0f5", 00:17:06.836 "base_bdev": "aio_bdev", 00:17:06.836 "thin_provision": false, 00:17:06.836 "snapshot": false, 00:17:06.836 "clone": false, 00:17:06.836 "esnap_clone": false 00:17:06.836 } 00:17:06.836 } 00:17:06.836 } 00:17:06.836 ] 00:17:06.836 21:12:44 -- common/autotest_common.sh@895 -- # return 0 00:17:06.836 21:12:44 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:06.836 21:12:44 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:07.096 21:12:44 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:07.096 21:12:44 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:07.096 21:12:44 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:07.096 21:12:45 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:07.096 21:12:45 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7eee8de8-5410-4b59-ac55-ffc22b8e6141 00:17:07.356 21:12:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a172e89c-99ca-4a4a-b4c6-59db84d2e0f5 00:17:07.617 21:12:45 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:07.617 21:12:45 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.617 00:17:07.617 real 0m16.789s 00:17:07.617 user 
0m43.627s 00:17:07.617 sys 0m2.944s 00:17:07.617 21:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.617 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 ************************************ 00:17:07.617 END TEST lvs_grow_dirty 00:17:07.617 ************************************ 00:17:07.617 21:12:45 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:07.617 21:12:45 -- common/autotest_common.sh@796 -- # type=--id 00:17:07.617 21:12:45 -- common/autotest_common.sh@797 -- # id=0 00:17:07.617 21:12:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:07.617 21:12:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:07.617 21:12:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:07.617 21:12:45 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:07.617 21:12:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:07.617 21:12:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:07.617 nvmf_trace.0 00:17:07.617 21:12:45 -- common/autotest_common.sh@811 -- # return 0 00:17:07.617 21:12:45 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:07.617 21:12:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:07.617 21:12:45 -- nvmf/common.sh@116 -- # sync 00:17:07.617 21:12:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:07.617 21:12:45 -- nvmf/common.sh@119 -- # set +e 00:17:07.617 21:12:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:07.617 21:12:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:07.877 rmmod nvme_tcp 00:17:07.877 rmmod nvme_fabrics 00:17:07.877 rmmod nvme_keyring 00:17:07.877 21:12:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:07.877 21:12:45 -- nvmf/common.sh@123 -- # set -e 00:17:07.877 21:12:45 -- nvmf/common.sh@124 -- # return 0 00:17:07.877 21:12:45 -- nvmf/common.sh@477 -- # '[' -n 2344132 ']' 00:17:07.877 21:12:45 -- nvmf/common.sh@478 -- # killprocess 2344132 00:17:07.877 21:12:45 -- common/autotest_common.sh@926 -- # '[' -z 2344132 ']' 00:17:07.877 21:12:45 -- common/autotest_common.sh@930 -- # kill -0 2344132 00:17:07.877 21:12:45 -- common/autotest_common.sh@931 -- # uname 00:17:07.877 21:12:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:07.877 21:12:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2344132 00:17:07.877 21:12:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:07.877 21:12:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:07.877 21:12:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2344132' 00:17:07.877 killing process with pid 2344132 00:17:07.877 21:12:45 -- common/autotest_common.sh@945 -- # kill 2344132 00:17:07.877 21:12:45 -- common/autotest_common.sh@950 -- # wait 2344132 00:17:07.877 21:12:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:07.877 21:12:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:07.877 21:12:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:07.877 21:12:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.877 21:12:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:07.877 21:12:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.877 21:12:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.877 21:12:45 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:10.423 21:12:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:10.423 00:17:10.423 real 0m42.482s 00:17:10.423 user 1m4.191s 00:17:10.423 sys 0m9.853s 00:17:10.423 21:12:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.423 21:12:48 -- common/autotest_common.sh@10 -- # set +x 00:17:10.423 ************************************ 00:17:10.423 END TEST nvmf_lvs_grow 00:17:10.423 ************************************ 00:17:10.423 21:12:48 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:10.423 21:12:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:10.423 21:12:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:10.423 21:12:48 -- common/autotest_common.sh@10 -- # set +x 00:17:10.423 ************************************ 00:17:10.423 START TEST nvmf_bdev_io_wait 00:17:10.423 ************************************ 00:17:10.423 21:12:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:10.423 * Looking for test storage... 00:17:10.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.423 21:12:48 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.423 21:12:48 -- nvmf/common.sh@7 -- # uname -s 00:17:10.423 21:12:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.423 21:12:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.423 21:12:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.423 21:12:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.423 21:12:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.423 21:12:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.423 21:12:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.423 21:12:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.423 21:12:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.423 21:12:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.423 21:12:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.423 21:12:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.423 21:12:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.423 21:12:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.423 21:12:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.423 21:12:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.423 21:12:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.423 21:12:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.423 21:12:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.423 21:12:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.423 21:12:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.423 21:12:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.423 21:12:48 -- paths/export.sh@5 -- # export PATH 00:17:10.423 21:12:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.423 21:12:48 -- nvmf/common.sh@46 -- # : 0 00:17:10.423 21:12:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:10.423 21:12:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:10.423 21:12:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:10.423 21:12:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.423 21:12:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.423 21:12:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:10.423 21:12:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:10.423 21:12:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:10.423 21:12:48 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.423 21:12:48 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.423 21:12:48 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:10.423 21:12:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:10.423 21:12:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.423 21:12:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:10.423 21:12:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:10.423 21:12:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:10.423 21:12:48 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.423 21:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.423 21:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.423 21:12:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:10.423 21:12:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:10.423 21:12:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:10.423 21:12:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.012 21:12:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:17.012 21:12:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:17.012 21:12:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:17.012 21:12:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:17.012 21:12:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:17.012 21:12:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:17.012 21:12:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:17.012 21:12:54 -- nvmf/common.sh@294 -- # net_devs=() 00:17:17.012 21:12:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:17.012 21:12:54 -- nvmf/common.sh@295 -- # e810=() 00:17:17.012 21:12:54 -- nvmf/common.sh@295 -- # local -ga e810 00:17:17.012 21:12:54 -- nvmf/common.sh@296 -- # x722=() 00:17:17.012 21:12:54 -- nvmf/common.sh@296 -- # local -ga x722 00:17:17.012 21:12:54 -- nvmf/common.sh@297 -- # mlx=() 00:17:17.012 21:12:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:17.012 21:12:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.012 21:12:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:17.012 21:12:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:17.012 21:12:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:17.012 21:12:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:17.012 21:12:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:17.012 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:17.012 21:12:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:17:17.012 21:12:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:17.012 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:17.012 21:12:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:17.012 21:12:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:17.012 21:12:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.012 21:12:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:17.012 21:12:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.012 21:12:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:17.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:17.012 21:12:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.012 21:12:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:17.012 21:12:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.012 21:12:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:17.012 21:12:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.012 21:12:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:17.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:17.012 21:12:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.012 21:12:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:17.012 21:12:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:17.012 21:12:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:17.012 21:12:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:17.012 21:12:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.012 21:12:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.012 21:12:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.012 21:12:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:17.012 21:12:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.012 21:12:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.012 21:12:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:17.012 21:12:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.012 21:12:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.012 21:12:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:17.012 21:12:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:17.012 21:12:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.012 21:12:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.274 21:12:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.274 21:12:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.274 21:12:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:17.274 21:12:55 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.274 21:12:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.274 21:12:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.274 21:12:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:17.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:17:17.274 00:17:17.274 --- 10.0.0.2 ping statistics --- 00:17:17.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.274 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:17:17.274 21:12:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:17:17.274 00:17:17.274 --- 10.0.0.1 ping statistics --- 00:17:17.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.274 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:17:17.274 21:12:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.274 21:12:55 -- nvmf/common.sh@410 -- # return 0 00:17:17.274 21:12:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:17.274 21:12:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.274 21:12:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:17.274 21:12:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:17.274 21:12:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.274 21:12:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:17.274 21:12:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:17.274 21:12:55 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:17.274 21:12:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:17.274 21:12:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:17.274 21:12:55 -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 21:12:55 -- nvmf/common.sh@469 -- # nvmfpid=2349042 00:17:17.536 21:12:55 -- nvmf/common.sh@470 -- # waitforlisten 2349042 00:17:17.536 21:12:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:17.536 21:12:55 -- common/autotest_common.sh@819 -- # '[' -z 2349042 ']' 00:17:17.536 21:12:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.536 21:12:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.536 21:12:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.536 21:12:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.536 21:12:55 -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 [2024-06-08 21:12:55.425325] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:17.536 [2024-06-08 21:12:55.425390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.536 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.536 [2024-06-08 21:12:55.495094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.536 [2024-06-08 21:12:55.572251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:17.536 [2024-06-08 21:12:55.572388] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.536 [2024-06-08 21:12:55.572398] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.536 [2024-06-08 21:12:55.572413] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.536 [2024-06-08 21:12:55.572534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.536 [2024-06-08 21:12:55.572645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.536 [2024-06-08 21:12:55.572781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.536 [2024-06-08 21:12:55.572782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.509 21:12:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.509 21:12:56 -- common/autotest_common.sh@852 -- # return 0 00:17:18.509 21:12:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:18.509 21:12:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 21:12:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 21:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 21:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 [2024-06-08 21:12:56.304565] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.509 21:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 Malloc0 00:17:18.509 21:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 21:12:56 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 21:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.509 21:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.509 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:17:18.509 [2024-06-08 21:12:56.382750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.509 21:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2349243 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@30 -- # READ_PID=2349245 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # config=() 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:18.509 21:12:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:18.509 21:12:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:18.509 { 00:17:18.509 "params": { 00:17:18.509 "name": "Nvme$subsystem", 00:17:18.509 "trtype": "$TEST_TRANSPORT", 00:17:18.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.509 "adrfam": "ipv4", 00:17:18.509 "trsvcid": "$NVMF_PORT", 00:17:18.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.509 "hdgst": ${hdgst:-false}, 00:17:18.509 "ddgst": ${ddgst:-false} 00:17:18.509 }, 00:17:18.509 "method": "bdev_nvme_attach_controller" 00:17:18.509 } 00:17:18.509 EOF 00:17:18.509 )") 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2349247 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # config=() 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:18.509 21:12:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:18.509 21:12:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:18.509 { 00:17:18.509 "params": { 00:17:18.509 "name": "Nvme$subsystem", 00:17:18.509 "trtype": "$TEST_TRANSPORT", 00:17:18.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.509 "adrfam": "ipv4", 00:17:18.509 "trsvcid": "$NVMF_PORT", 00:17:18.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.509 "hdgst": ${hdgst:-false}, 00:17:18.509 "ddgst": ${ddgst:-false} 00:17:18.509 }, 00:17:18.509 "method": "bdev_nvme_attach_controller" 00:17:18.509 } 00:17:18.509 EOF 00:17:18.509 )") 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:18.509 21:12:56 -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2349250 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@35 -- # sync 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # config=() 00:17:18.509 21:12:56 -- nvmf/common.sh@542 -- # cat 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:18.509 21:12:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:18.509 21:12:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:18.509 { 00:17:18.509 "params": { 00:17:18.509 "name": "Nvme$subsystem", 00:17:18.509 "trtype": "$TEST_TRANSPORT", 00:17:18.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.509 "adrfam": "ipv4", 00:17:18.509 "trsvcid": "$NVMF_PORT", 00:17:18.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.509 "hdgst": ${hdgst:-false}, 00:17:18.509 "ddgst": ${ddgst:-false} 00:17:18.509 }, 00:17:18.509 "method": "bdev_nvme_attach_controller" 00:17:18.509 } 00:17:18.509 EOF 00:17:18.509 )") 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:18.509 21:12:56 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # config=() 00:17:18.509 21:12:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:18.509 21:12:56 -- nvmf/common.sh@542 -- # cat 00:17:18.509 21:12:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:18.509 21:12:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:18.509 { 00:17:18.509 "params": { 00:17:18.509 "name": "Nvme$subsystem", 00:17:18.509 "trtype": "$TEST_TRANSPORT", 00:17:18.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.510 "adrfam": "ipv4", 00:17:18.510 "trsvcid": "$NVMF_PORT", 00:17:18.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.510 "hdgst": ${hdgst:-false}, 00:17:18.510 "ddgst": ${ddgst:-false} 00:17:18.510 }, 00:17:18.510 "method": "bdev_nvme_attach_controller" 00:17:18.510 } 00:17:18.510 EOF 00:17:18.510 )") 00:17:18.510 21:12:56 -- nvmf/common.sh@542 -- # cat 00:17:18.510 21:12:56 -- target/bdev_io_wait.sh@37 -- # wait 2349243 00:17:18.510 21:12:56 -- nvmf/common.sh@542 -- # cat 00:17:18.510 21:12:56 -- nvmf/common.sh@544 -- # jq . 00:17:18.510 21:12:56 -- nvmf/common.sh@544 -- # jq . 00:17:18.510 21:12:56 -- nvmf/common.sh@544 -- # jq . 00:17:18.510 21:12:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:18.510 21:12:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:18.510 "params": { 00:17:18.510 "name": "Nvme1", 00:17:18.510 "trtype": "tcp", 00:17:18.510 "traddr": "10.0.0.2", 00:17:18.510 "adrfam": "ipv4", 00:17:18.510 "trsvcid": "4420", 00:17:18.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.510 "hdgst": false, 00:17:18.510 "ddgst": false 00:17:18.510 }, 00:17:18.510 "method": "bdev_nvme_attach_controller" 00:17:18.510 }' 00:17:18.510 21:12:56 -- nvmf/common.sh@544 -- # jq . 
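Each of the four bdevperf jobs above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) is handed its bdev configuration on /dev/fd/63, i.e. through bash process substitution: gen_nvmf_target_json prints a JSON config whose single "bdev_nvme_attach_controller" entry points at the target on 10.0.0.2:4420, and bdevperf loads that config instead of attaching over RPC. A roughly equivalent standalone invocation, assuming the JSON printed in this trace has been saved to a file first (nvme.json is an illustrative name, and the path is relative to the spdk build tree):
  # re-run just the write job from this trace against a pre-saved target config
  build/examples/bdevperf -m 0x10 -i 1 --json nvme.json -q 128 -o 4096 -w write -t 1 -s 256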
00:17:18.510 21:12:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:18.510 21:12:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:18.510 "params": { 00:17:18.510 "name": "Nvme1", 00:17:18.510 "trtype": "tcp", 00:17:18.510 "traddr": "10.0.0.2", 00:17:18.510 "adrfam": "ipv4", 00:17:18.510 "trsvcid": "4420", 00:17:18.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.510 "hdgst": false, 00:17:18.510 "ddgst": false 00:17:18.510 }, 00:17:18.510 "method": "bdev_nvme_attach_controller" 00:17:18.510 }' 00:17:18.510 21:12:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:18.510 21:12:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:18.510 "params": { 00:17:18.510 "name": "Nvme1", 00:17:18.510 "trtype": "tcp", 00:17:18.510 "traddr": "10.0.0.2", 00:17:18.510 "adrfam": "ipv4", 00:17:18.510 "trsvcid": "4420", 00:17:18.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.510 "hdgst": false, 00:17:18.510 "ddgst": false 00:17:18.510 }, 00:17:18.510 "method": "bdev_nvme_attach_controller" 00:17:18.510 }' 00:17:18.510 21:12:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:18.510 21:12:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:18.510 "params": { 00:17:18.510 "name": "Nvme1", 00:17:18.510 "trtype": "tcp", 00:17:18.510 "traddr": "10.0.0.2", 00:17:18.510 "adrfam": "ipv4", 00:17:18.510 "trsvcid": "4420", 00:17:18.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.510 "hdgst": false, 00:17:18.510 "ddgst": false 00:17:18.510 }, 00:17:18.510 "method": "bdev_nvme_attach_controller" 00:17:18.510 }' 00:17:18.510 [2024-06-08 21:12:56.433010] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:18.510 [2024-06-08 21:12:56.433009] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:18.510 [2024-06-08 21:12:56.433062] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-06-08 21:12:56.433062] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:18.510 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:18.510 [2024-06-08 21:12:56.433943] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:18.510 [2024-06-08 21:12:56.433985] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:18.510 [2024-06-08 21:12:56.436407] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:18.510 [2024-06-08 21:12:56.436453] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:18.510 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.510 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.510 [2024-06-08 21:12:56.576416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.771 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.771 [2024-06-08 21:12:56.624863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.771 [2024-06-08 21:12:56.636046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.771 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.771 [2024-06-08 21:12:56.684780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:18.771 [2024-06-08 21:12:56.696878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.771 [2024-06-08 21:12:56.741352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.771 [2024-06-08 21:12:56.746647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.771 [2024-06-08 21:12:56.791146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.771 Running I/O for 1 seconds... 00:17:19.035 Running I/O for 1 seconds... 00:17:19.035 Running I/O for 1 seconds... 00:17:19.035 Running I/O for 1 seconds... 00:17:19.977 00:17:19.977 Latency(us) 00:17:19.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.977 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:19.977 Nvme1n1 : 1.00 14627.54 57.14 0.00 0.00 8725.90 4587.52 16165.55 00:17:19.977 =================================================================================================================== 00:17:19.977 Total : 14627.54 57.14 0.00 0.00 8725.90 4587.52 16165.55 00:17:19.977 00:17:19.977 Latency(us) 00:17:19.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.977 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:19.977 Nvme1n1 : 1.01 11757.88 45.93 0.00 0.00 10852.01 5515.95 25231.36 00:17:19.977 =================================================================================================================== 00:17:19.977 Total : 11757.88 45.93 0.00 0.00 10852.01 5515.95 25231.36 00:17:19.977 00:17:19.977 Latency(us) 00:17:19.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.977 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:19.977 Nvme1n1 : 1.00 18543.33 72.43 0.00 0.00 6886.19 2703.36 11141.12 00:17:19.977 =================================================================================================================== 00:17:19.977 Total : 18543.33 72.43 0.00 0.00 6886.19 2703.36 11141.12 00:17:19.977 21:12:58 -- target/bdev_io_wait.sh@38 -- # wait 2349245 00:17:20.238 00:17:20.238 Latency(us) 00:17:20.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.238 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:20.238 Nvme1n1 : 1.00 191744.53 749.00 0.00 0.00 665.04 262.83 740.69 00:17:20.238 =================================================================================================================== 00:17:20.238 Total : 191744.53 749.00 0.00 0.00 665.04 262.83 740.69 00:17:20.238 
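The flush job reports an order of magnitude more IOPS than the data-moving jobs because a flush carries no 4 KiB payload and the Malloc (RAM) bdev backing the namespace on the target completes it immediately, so its throughput is bounded only by command round-trip overhead; the write/read/unmap figures reflect the full NVMe/TCP round trip between cvl_0_1 and cvl_0_0 at queue depth 128. The same I/O can also be observed from the target side; a small sketch, assuming the default /var/tmp/spdk.sock socket (bdev_get_iostat is not part of this trace and is shown only as an assumed companion command):
  # per-bdev counters as seen by the nvmf target for the Malloc0 namespace
  scripts/rpc.py bdev_get_iostat -b Malloc0
  # tear-down then mirrors the trace that follows: drop the subsystem once the perf jobs have exited
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1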
21:12:58 -- target/bdev_io_wait.sh@39 -- # wait 2349247 00:17:20.238 21:12:58 -- target/bdev_io_wait.sh@40 -- # wait 2349250 00:17:20.239 21:12:58 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.239 21:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:20.239 21:12:58 -- common/autotest_common.sh@10 -- # set +x 00:17:20.239 21:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:20.239 21:12:58 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:20.239 21:12:58 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:20.239 21:12:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:20.239 21:12:58 -- nvmf/common.sh@116 -- # sync 00:17:20.239 21:12:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:20.239 21:12:58 -- nvmf/common.sh@119 -- # set +e 00:17:20.239 21:12:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:20.239 21:12:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:20.239 rmmod nvme_tcp 00:17:20.239 rmmod nvme_fabrics 00:17:20.239 rmmod nvme_keyring 00:17:20.239 21:12:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:20.239 21:12:58 -- nvmf/common.sh@123 -- # set -e 00:17:20.239 21:12:58 -- nvmf/common.sh@124 -- # return 0 00:17:20.239 21:12:58 -- nvmf/common.sh@477 -- # '[' -n 2349042 ']' 00:17:20.239 21:12:58 -- nvmf/common.sh@478 -- # killprocess 2349042 00:17:20.239 21:12:58 -- common/autotest_common.sh@926 -- # '[' -z 2349042 ']' 00:17:20.239 21:12:58 -- common/autotest_common.sh@930 -- # kill -0 2349042 00:17:20.239 21:12:58 -- common/autotest_common.sh@931 -- # uname 00:17:20.239 21:12:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:20.239 21:12:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2349042 00:17:20.500 21:12:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:20.500 21:12:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:20.500 21:12:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2349042' 00:17:20.500 killing process with pid 2349042 00:17:20.500 21:12:58 -- common/autotest_common.sh@945 -- # kill 2349042 00:17:20.500 21:12:58 -- common/autotest_common.sh@950 -- # wait 2349042 00:17:20.500 21:12:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:20.500 21:12:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:20.500 21:12:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:20.500 21:12:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.500 21:12:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:20.500 21:12:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.500 21:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.500 21:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.053 21:13:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:23.053 00:17:23.053 real 0m12.500s 00:17:23.053 user 0m18.975s 00:17:23.053 sys 0m6.704s 00:17:23.053 21:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.053 21:13:00 -- common/autotest_common.sh@10 -- # set +x 00:17:23.053 ************************************ 00:17:23.053 END TEST nvmf_bdev_io_wait 00:17:23.053 ************************************ 00:17:23.053 21:13:00 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:23.053 21:13:00 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:23.053 21:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:23.053 21:13:00 -- common/autotest_common.sh@10 -- # set +x 00:17:23.053 ************************************ 00:17:23.053 START TEST nvmf_queue_depth 00:17:23.053 ************************************ 00:17:23.053 21:13:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:23.053 * Looking for test storage... 00:17:23.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.053 21:13:00 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.053 21:13:00 -- nvmf/common.sh@7 -- # uname -s 00:17:23.053 21:13:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.053 21:13:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.053 21:13:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.053 21:13:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.053 21:13:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.053 21:13:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.053 21:13:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.053 21:13:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.053 21:13:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.053 21:13:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.053 21:13:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.053 21:13:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.053 21:13:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.053 21:13:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.053 21:13:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.053 21:13:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.053 21:13:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.053 21:13:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.053 21:13:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.053 21:13:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.053 21:13:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.053 21:13:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.053 21:13:00 -- paths/export.sh@5 -- # export PATH 00:17:23.053 21:13:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.053 21:13:00 -- nvmf/common.sh@46 -- # : 0 00:17:23.053 21:13:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:23.053 21:13:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:23.053 21:13:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:23.053 21:13:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.053 21:13:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.053 21:13:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:23.053 21:13:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:23.053 21:13:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:23.053 21:13:00 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:23.053 21:13:00 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:23.053 21:13:00 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.053 21:13:00 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:23.053 21:13:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:23.053 21:13:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.053 21:13:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:23.053 21:13:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:23.053 21:13:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:23.053 21:13:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.053 21:13:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.053 21:13:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.053 21:13:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:23.053 21:13:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:23.053 21:13:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:23.053 21:13:00 -- common/autotest_common.sh@10 -- # set +x 00:17:29.646 21:13:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:29.646 21:13:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:29.646 21:13:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:29.646 21:13:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:29.646 21:13:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:29.646 21:13:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:29.646 21:13:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:29.646 21:13:07 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:29.646 21:13:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:29.646 21:13:07 -- nvmf/common.sh@295 -- # e810=() 00:17:29.646 21:13:07 -- nvmf/common.sh@295 -- # local -ga e810 00:17:29.646 21:13:07 -- nvmf/common.sh@296 -- # x722=() 00:17:29.646 21:13:07 -- nvmf/common.sh@296 -- # local -ga x722 00:17:29.646 21:13:07 -- nvmf/common.sh@297 -- # mlx=() 00:17:29.646 21:13:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:29.646 21:13:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.646 21:13:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:29.646 21:13:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:29.646 21:13:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:29.646 21:13:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:29.646 21:13:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:29.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:29.646 21:13:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:29.646 21:13:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:29.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:29.646 21:13:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:29.646 21:13:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:29.646 21:13:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.646 21:13:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:29.646 21:13:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:29.646 21:13:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:29.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:29.646 21:13:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.646 21:13:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:29.646 21:13:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.646 21:13:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:29.646 21:13:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.646 21:13:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:29.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:29.646 21:13:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.646 21:13:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:29.646 21:13:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:29.646 21:13:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:29.646 21:13:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:29.646 21:13:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.646 21:13:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.646 21:13:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.646 21:13:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:29.646 21:13:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.646 21:13:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.646 21:13:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:29.646 21:13:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.646 21:13:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.646 21:13:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:29.646 21:13:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:29.646 21:13:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.646 21:13:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.646 21:13:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.646 21:13:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.646 21:13:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:29.647 21:13:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.647 21:13:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.647 21:13:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.908 21:13:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:29.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:17:29.908 00:17:29.908 --- 10.0.0.2 ping statistics --- 00:17:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.908 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:17:29.908 21:13:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:17:29.908 00:17:29.908 --- 10.0.0.1 ping statistics --- 00:17:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.908 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:17:29.908 21:13:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.908 21:13:07 -- nvmf/common.sh@410 -- # return 0 00:17:29.908 21:13:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:29.908 21:13:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.908 21:13:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:29.908 21:13:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:29.908 21:13:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.908 21:13:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:29.908 21:13:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:29.908 21:13:07 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:29.908 21:13:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.908 21:13:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:29.908 21:13:07 -- common/autotest_common.sh@10 -- # set +x 00:17:29.908 21:13:07 -- nvmf/common.sh@469 -- # nvmfpid=2353800 00:17:29.908 21:13:07 -- nvmf/common.sh@470 -- # waitforlisten 2353800 00:17:29.908 21:13:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.908 21:13:07 -- common/autotest_common.sh@819 -- # '[' -z 2353800 ']' 00:17:29.908 21:13:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.908 21:13:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.908 21:13:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.908 21:13:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.908 21:13:07 -- common/autotest_common.sh@10 -- # set +x 00:17:29.908 [2024-06-08 21:13:07.849083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:29.908 [2024-06-08 21:13:07.849133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.908 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.908 [2024-06-08 21:13:07.899942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.908 [2024-06-08 21:13:07.974586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.908 [2024-06-08 21:13:07.974713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.908 [2024-06-08 21:13:07.974721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.908 [2024-06-08 21:13:07.974726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
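At this point the queue_depth target has been launched inside the cvl_0_0_ns_spdk namespace with -e 0xFFFF, so all tracepoint groups are enabled and app_setup_trace prints how to inspect them. A minimal sketch of acting on that notice, assuming spdk_trace was built under build/bin of this workspace:

# Sketch only: snapshot the nvmf tracepoints of target instance 0, exactly as
# the notice above suggests. The binary location is an assumption based on this
# workspace layout; the shared-memory file name comes from the notice itself.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK_DIR/build/bin/spdk_trace -s nvmf -i 0
# or keep the raw buffer for offline analysis:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0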
00:17:29.908 [2024-06-08 21:13:07.974752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.852 21:13:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.852 21:13:08 -- common/autotest_common.sh@852 -- # return 0 00:17:30.852 21:13:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.852 21:13:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.852 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 21:13:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.852 21:13:08 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.852 21:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.852 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 [2024-06-08 21:13:08.751583] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.852 21:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.853 21:13:08 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:30.853 21:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.853 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 Malloc0 00:17:30.853 21:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.853 21:13:08 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.853 21:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.853 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 21:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.853 21:13:08 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.853 21:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.853 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 21:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.853 21:13:08 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.853 21:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:30.853 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 [2024-06-08 21:13:08.820597] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.853 21:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:30.853 21:13:08 -- target/queue_depth.sh@30 -- # bdevperf_pid=2353977 00:17:30.853 21:13:08 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.853 21:13:08 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:30.853 21:13:08 -- target/queue_depth.sh@33 -- # waitforlisten 2353977 /var/tmp/bdevperf.sock 00:17:30.853 21:13:08 -- common/autotest_common.sh@819 -- # '[' -z 2353977 ']' 00:17:30.853 21:13:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.853 21:13:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.853 21:13:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:30.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.853 21:13:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.853 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 [2024-06-08 21:13:08.879634] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:30.853 [2024-06-08 21:13:08.879717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353977 ] 00:17:30.853 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.853 [2024-06-08 21:13:08.942962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.113 [2024-06-08 21:13:09.014934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.683 21:13:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.684 21:13:09 -- common/autotest_common.sh@852 -- # return 0 00:17:31.684 21:13:09 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:31.684 21:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.684 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:17:31.944 NVMe0n1 00:17:31.944 21:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.944 21:13:09 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.944 Running I/O for 10 seconds... 00:17:41.943 00:17:41.943 Latency(us) 00:17:41.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:41.943 Verification LBA range: start 0x0 length 0x4000 00:17:41.943 NVMe0n1 : 10.04 18840.97 73.60 0.00 0.00 54193.66 10922.67 57671.68 00:17:41.943 =================================================================================================================== 00:17:41.943 Total : 18840.97 73.60 0.00 0.00 54193.66 10922.67 57671.68 00:17:41.943 0 00:17:41.943 21:13:20 -- target/queue_depth.sh@39 -- # killprocess 2353977 00:17:41.943 21:13:20 -- common/autotest_common.sh@926 -- # '[' -z 2353977 ']' 00:17:41.943 21:13:20 -- common/autotest_common.sh@930 -- # kill -0 2353977 00:17:41.943 21:13:20 -- common/autotest_common.sh@931 -- # uname 00:17:41.943 21:13:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.943 21:13:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2353977 00:17:42.205 21:13:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:42.205 21:13:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:42.205 21:13:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2353977' 00:17:42.205 killing process with pid 2353977 00:17:42.205 21:13:20 -- common/autotest_common.sh@945 -- # kill 2353977 00:17:42.205 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.205 00:17:42.205 Latency(us) 00:17:42.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.205 =================================================================================================================== 00:17:42.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.205 21:13:20 -- 
common/autotest_common.sh@950 -- # wait 2353977 00:17:42.205 21:13:20 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:42.205 21:13:20 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:42.205 21:13:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:42.205 21:13:20 -- nvmf/common.sh@116 -- # sync 00:17:42.205 21:13:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:42.205 21:13:20 -- nvmf/common.sh@119 -- # set +e 00:17:42.205 21:13:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:42.205 21:13:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:42.205 rmmod nvme_tcp 00:17:42.205 rmmod nvme_fabrics 00:17:42.205 rmmod nvme_keyring 00:17:42.205 21:13:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:42.205 21:13:20 -- nvmf/common.sh@123 -- # set -e 00:17:42.205 21:13:20 -- nvmf/common.sh@124 -- # return 0 00:17:42.205 21:13:20 -- nvmf/common.sh@477 -- # '[' -n 2353800 ']' 00:17:42.205 21:13:20 -- nvmf/common.sh@478 -- # killprocess 2353800 00:17:42.205 21:13:20 -- common/autotest_common.sh@926 -- # '[' -z 2353800 ']' 00:17:42.205 21:13:20 -- common/autotest_common.sh@930 -- # kill -0 2353800 00:17:42.205 21:13:20 -- common/autotest_common.sh@931 -- # uname 00:17:42.205 21:13:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.205 21:13:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2353800 00:17:42.466 21:13:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:42.466 21:13:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:42.466 21:13:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2353800' 00:17:42.466 killing process with pid 2353800 00:17:42.466 21:13:20 -- common/autotest_common.sh@945 -- # kill 2353800 00:17:42.466 21:13:20 -- common/autotest_common.sh@950 -- # wait 2353800 00:17:42.466 21:13:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:42.466 21:13:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:42.466 21:13:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:42.466 21:13:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.466 21:13:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:42.466 21:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.466 21:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.466 21:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.014 21:13:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:45.014 00:17:45.014 real 0m21.894s 00:17:45.014 user 0m25.694s 00:17:45.014 sys 0m6.364s 00:17:45.014 21:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.014 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:45.014 ************************************ 00:17:45.014 END TEST nvmf_queue_depth 00:17:45.014 ************************************ 00:17:45.014 21:13:22 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:45.014 21:13:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:45.014 21:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.014 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:45.014 ************************************ 00:17:45.014 START TEST nvmf_multipath 00:17:45.014 ************************************ 00:17:45.014 21:13:22 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:45.014 * Looking for test storage... 00:17:45.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.014 21:13:22 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.014 21:13:22 -- nvmf/common.sh@7 -- # uname -s 00:17:45.014 21:13:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.014 21:13:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.014 21:13:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.014 21:13:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.014 21:13:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.014 21:13:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.014 21:13:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.014 21:13:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.014 21:13:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.014 21:13:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.014 21:13:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.014 21:13:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.014 21:13:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.014 21:13:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.014 21:13:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.014 21:13:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.014 21:13:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.014 21:13:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.014 21:13:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.014 21:13:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.014 21:13:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.015 21:13:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.015 21:13:22 -- paths/export.sh@5 -- # export PATH 00:17:45.015 21:13:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.015 21:13:22 -- nvmf/common.sh@46 -- # : 0 00:17:45.015 21:13:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:45.015 21:13:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:45.015 21:13:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:45.015 21:13:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.015 21:13:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.015 21:13:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:45.015 21:13:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:45.015 21:13:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:45.015 21:13:22 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.015 21:13:22 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.015 21:13:22 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:45.015 21:13:22 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.015 21:13:22 -- target/multipath.sh@43 -- # nvmftestinit 00:17:45.015 21:13:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:45.015 21:13:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.015 21:13:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:45.015 21:13:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:45.015 21:13:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:45.015 21:13:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.015 21:13:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.015 21:13:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.015 21:13:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:45.015 21:13:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:45.015 21:13:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:45.015 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.650 21:13:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.650 21:13:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:51.650 21:13:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:51.650 21:13:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:51.650 21:13:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:51.650 21:13:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:51.650 21:13:29 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:17:51.650 21:13:29 -- nvmf/common.sh@294 -- # net_devs=() 00:17:51.650 21:13:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:51.650 21:13:29 -- nvmf/common.sh@295 -- # e810=() 00:17:51.650 21:13:29 -- nvmf/common.sh@295 -- # local -ga e810 00:17:51.650 21:13:29 -- nvmf/common.sh@296 -- # x722=() 00:17:51.650 21:13:29 -- nvmf/common.sh@296 -- # local -ga x722 00:17:51.650 21:13:29 -- nvmf/common.sh@297 -- # mlx=() 00:17:51.650 21:13:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:51.650 21:13:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.650 21:13:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:51.650 21:13:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:51.650 21:13:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:51.650 21:13:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.650 21:13:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:51.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:51.650 21:13:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.650 21:13:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:51.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:51.650 21:13:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:51.650 21:13:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.650 21:13:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.650 21:13:29 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:17:51.650 21:13:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.650 21:13:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:51.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:51.650 21:13:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.650 21:13:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.650 21:13:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.650 21:13:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.650 21:13:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.650 21:13:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:51.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:51.650 21:13:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.650 21:13:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:51.650 21:13:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:51.650 21:13:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:51.650 21:13:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.650 21:13:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.650 21:13:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.650 21:13:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:51.650 21:13:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.650 21:13:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.650 21:13:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:51.650 21:13:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.650 21:13:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.650 21:13:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:51.650 21:13:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:51.650 21:13:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.650 21:13:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.650 21:13:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.650 21:13:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.650 21:13:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:51.650 21:13:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.650 21:13:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.650 21:13:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.650 21:13:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:51.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:17:51.650 00:17:51.650 --- 10.0.0.2 ping statistics --- 00:17:51.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.650 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:17:51.650 21:13:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.475 ms 00:17:51.650 00:17:51.650 --- 10.0.0.1 ping statistics --- 00:17:51.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.650 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:17:51.650 21:13:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.650 21:13:29 -- nvmf/common.sh@410 -- # return 0 00:17:51.650 21:13:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.650 21:13:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.650 21:13:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.650 21:13:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.650 21:13:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.650 21:13:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.650 21:13:29 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:51.650 21:13:29 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:51.650 only one NIC for nvmf test 00:17:51.650 21:13:29 -- target/multipath.sh@47 -- # nvmftestfini 00:17:51.650 21:13:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:51.650 21:13:29 -- nvmf/common.sh@116 -- # sync 00:17:51.650 21:13:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:51.650 21:13:29 -- nvmf/common.sh@119 -- # set +e 00:17:51.650 21:13:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:51.650 21:13:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:51.650 rmmod nvme_tcp 00:17:51.650 rmmod nvme_fabrics 00:17:51.650 rmmod nvme_keyring 00:17:51.650 21:13:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:51.912 21:13:29 -- nvmf/common.sh@123 -- # set -e 00:17:51.912 21:13:29 -- nvmf/common.sh@124 -- # return 0 00:17:51.912 21:13:29 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:51.912 21:13:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:51.912 21:13:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:51.912 21:13:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:51.912 21:13:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.912 21:13:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:51.912 21:13:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.912 21:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.912 21:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.828 21:13:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:53.828 21:13:31 -- target/multipath.sh@48 -- # exit 0 00:17:53.828 21:13:31 -- target/multipath.sh@1 -- # nvmftestfini 00:17:53.828 21:13:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:53.828 21:13:31 -- nvmf/common.sh@116 -- # sync 00:17:53.828 21:13:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:53.828 21:13:31 -- nvmf/common.sh@119 -- # set +e 00:17:53.828 21:13:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:53.828 21:13:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:53.828 21:13:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:53.828 21:13:31 -- nvmf/common.sh@123 -- # set -e 00:17:53.828 21:13:31 -- nvmf/common.sh@124 -- # return 0 00:17:53.828 21:13:31 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:53.828 21:13:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:53.828 21:13:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:53.828 21:13:31 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:17:53.828 21:13:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.828 21:13:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:53.828 21:13:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.828 21:13:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.828 21:13:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.828 21:13:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:53.828 00:17:53.828 real 0m9.306s 00:17:53.828 user 0m2.092s 00:17:53.828 sys 0m5.120s 00:17:53.828 21:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.828 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:53.828 ************************************ 00:17:53.828 END TEST nvmf_multipath 00:17:53.828 ************************************ 00:17:53.828 21:13:31 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:53.828 21:13:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:53.828 21:13:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:53.828 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:53.828 ************************************ 00:17:53.828 START TEST nvmf_zcopy 00:17:53.828 ************************************ 00:17:53.828 21:13:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:54.088 * Looking for test storage... 00:17:54.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.088 21:13:31 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.088 21:13:31 -- nvmf/common.sh@7 -- # uname -s 00:17:54.088 21:13:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.088 21:13:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.088 21:13:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.088 21:13:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.088 21:13:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.088 21:13:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.088 21:13:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.088 21:13:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.088 21:13:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.088 21:13:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.088 21:13:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.088 21:13:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.088 21:13:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.088 21:13:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.088 21:13:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.088 21:13:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.088 21:13:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.088 21:13:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.088 21:13:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.088 21:13:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.088 21:13:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.088 21:13:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.088 21:13:32 -- paths/export.sh@5 -- # export PATH 00:17:54.088 21:13:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.088 21:13:32 -- nvmf/common.sh@46 -- # : 0 00:17:54.088 21:13:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:54.088 21:13:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:54.088 21:13:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:54.088 21:13:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.088 21:13:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.088 21:13:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:54.088 21:13:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:54.088 21:13:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:54.088 21:13:32 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:54.088 21:13:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:54.088 21:13:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.088 21:13:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:54.088 21:13:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:54.088 21:13:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:54.088 21:13:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.088 21:13:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.088 21:13:32 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.088 21:13:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:54.088 21:13:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:54.088 21:13:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:54.088 21:13:32 -- common/autotest_common.sh@10 -- # set +x 00:18:00.672 21:13:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:00.672 21:13:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:00.672 21:13:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:00.672 21:13:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:00.672 21:13:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:00.932 21:13:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:00.932 21:13:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:00.932 21:13:38 -- nvmf/common.sh@294 -- # net_devs=() 00:18:00.932 21:13:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:00.932 21:13:38 -- nvmf/common.sh@295 -- # e810=() 00:18:00.932 21:13:38 -- nvmf/common.sh@295 -- # local -ga e810 00:18:00.932 21:13:38 -- nvmf/common.sh@296 -- # x722=() 00:18:00.932 21:13:38 -- nvmf/common.sh@296 -- # local -ga x722 00:18:00.932 21:13:38 -- nvmf/common.sh@297 -- # mlx=() 00:18:00.932 21:13:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:00.932 21:13:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.933 21:13:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:00.933 21:13:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:00.933 21:13:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:00.933 21:13:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:00.933 21:13:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:00.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:00.933 21:13:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:00.933 21:13:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:00.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:00.933 
21:13:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:00.933 21:13:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:00.933 21:13:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.933 21:13:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:00.933 21:13:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.933 21:13:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:00.933 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:00.933 21:13:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.933 21:13:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:00.933 21:13:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.933 21:13:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:00.933 21:13:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.933 21:13:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:00.933 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:00.933 21:13:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.933 21:13:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:00.933 21:13:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:00.933 21:13:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:00.933 21:13:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:00.933 21:13:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.933 21:13:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.933 21:13:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.933 21:13:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:00.933 21:13:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.933 21:13:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.933 21:13:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:00.933 21:13:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.933 21:13:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.933 21:13:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:00.933 21:13:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:00.933 21:13:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.933 21:13:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.933 21:13:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.933 21:13:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.933 21:13:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:00.933 21:13:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.194 21:13:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.194 21:13:39 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.194 21:13:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:01.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:18:01.194 00:18:01.194 --- 10.0.0.2 ping statistics --- 00:18:01.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.194 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:18:01.194 21:13:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:18:01.194 00:18:01.194 --- 10.0.0.1 ping statistics --- 00:18:01.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.194 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:18:01.194 21:13:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.194 21:13:39 -- nvmf/common.sh@410 -- # return 0 00:18:01.194 21:13:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:01.194 21:13:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.194 21:13:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:01.194 21:13:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:01.194 21:13:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.194 21:13:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:01.194 21:13:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:01.194 21:13:39 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:01.194 21:13:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.194 21:13:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:01.194 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:01.194 21:13:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.194 21:13:39 -- nvmf/common.sh@469 -- # nvmfpid=2364457 00:18:01.194 21:13:39 -- nvmf/common.sh@470 -- # waitforlisten 2364457 00:18:01.194 21:13:39 -- common/autotest_common.sh@819 -- # '[' -z 2364457 ']' 00:18:01.194 21:13:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.194 21:13:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:01.194 21:13:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.194 21:13:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:01.194 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:01.194 [2024-06-08 21:13:39.167201] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
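Before the target application is launched, nvmf_tcp_init (traced above) pins the first E810 port inside a network namespace as the target side and leaves its peer in the root namespace as the initiator side. Condensed into a standalone sketch, using the interface names and addresses shown in the trace; this is a summary of the traced commands, not the literal function body:

#!/usr/bin/env bash
# Sketch of the TCP test topology from the trace: cvl_0_0 becomes the target
# port inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root
# namespace as the initiator port.
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# accept NVMe/TCP traffic (port 4420) arriving on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# connectivity checks, as in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1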
00:18:01.194 [2024-06-08 21:13:39.167243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.194 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.194 [2024-06-08 21:13:39.242552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.455 [2024-06-08 21:13:39.321136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.455 [2024-06-08 21:13:39.321279] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.455 [2024-06-08 21:13:39.321288] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.455 [2024-06-08 21:13:39.321295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.455 [2024-06-08 21:13:39.321318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.027 21:13:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:02.027 21:13:39 -- common/autotest_common.sh@852 -- # return 0 00:18:02.027 21:13:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.027 21:13:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:02.027 21:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 21:13:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.027 21:13:40 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:02.027 21:13:40 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:02.027 21:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.027 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 [2024-06-08 21:13:40.029798] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.027 21:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.027 21:13:40 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:02.027 21:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.027 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 21:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.027 21:13:40 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.027 21:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.027 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 [2024-06-08 21:13:40.054068] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.027 21:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.027 21:13:40 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:02.027 21:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.027 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 21:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.027 21:13:40 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:02.027 21:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.027 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 malloc0 00:18:02.027 21:13:40 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:02.027 21:13:40 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.027 21:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.027 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 21:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.027 21:13:40 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:02.027 21:13:40 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:02.027 21:13:40 -- nvmf/common.sh@520 -- # config=() 00:18:02.027 21:13:40 -- nvmf/common.sh@520 -- # local subsystem config 00:18:02.027 21:13:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:02.027 21:13:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:02.027 { 00:18:02.027 "params": { 00:18:02.027 "name": "Nvme$subsystem", 00:18:02.027 "trtype": "$TEST_TRANSPORT", 00:18:02.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.027 "adrfam": "ipv4", 00:18:02.027 "trsvcid": "$NVMF_PORT", 00:18:02.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.027 "hdgst": ${hdgst:-false}, 00:18:02.027 "ddgst": ${ddgst:-false} 00:18:02.027 }, 00:18:02.027 "method": "bdev_nvme_attach_controller" 00:18:02.027 } 00:18:02.027 EOF 00:18:02.027 )") 00:18:02.027 21:13:40 -- nvmf/common.sh@542 -- # cat 00:18:02.027 21:13:40 -- nvmf/common.sh@544 -- # jq . 00:18:02.027 21:13:40 -- nvmf/common.sh@545 -- # IFS=, 00:18:02.028 21:13:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:02.028 "params": { 00:18:02.028 "name": "Nvme1", 00:18:02.028 "trtype": "tcp", 00:18:02.028 "traddr": "10.0.0.2", 00:18:02.028 "adrfam": "ipv4", 00:18:02.028 "trsvcid": "4420", 00:18:02.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.028 "hdgst": false, 00:18:02.028 "ddgst": false 00:18:02.028 }, 00:18:02.028 "method": "bdev_nvme_attach_controller" 00:18:02.028 }' 00:18:02.288 [2024-06-08 21:13:40.150234] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:02.288 [2024-06-08 21:13:40.150299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364715 ] 00:18:02.288 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.288 [2024-06-08 21:13:40.212887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.288 [2024-06-08 21:13:40.285958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.549 Running I/O for 10 seconds... 
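For reference, the rpc_cmd calls traced above (transport with --zcopy, subsystem, listener, malloc bdev, namespace) map onto the following scripts/rpc.py invocations against the target's /var/tmp/spdk.sock. This is an equivalent sketch; the test issues them through its rpc_cmd helper rather than calling rpc.py like this, and the RPC path below is inferred from the workspace layout seen elsewhere in the log:

# Equivalent RPC sequence for the target configuration traced above (sketch)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1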
00:18:12.553
00:18:12.553 Latency(us)
00:18:12.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.553 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:12.553 Verification LBA range: start 0x0 length 0x1000
00:18:12.553 Nvme1n1 : 10.01 13184.73 103.01 0.00 0.00 9680.22 1378.99 20971.52
00:18:12.553 ===================================================================================================================
00:18:12.553 Total : 13184.73 103.01 0.00 0.00 9680.22 1378.99 20971.52
00:18:12.553 21:13:50 -- target/zcopy.sh@39 -- # perfpid=2366744
00:18:12.553 21:13:50 -- target/zcopy.sh@41 -- # xtrace_disable
00:18:12.553 21:13:50 -- common/autotest_common.sh@10 -- # set +x
00:18:12.553 21:13:50 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:12.553 21:13:50 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:12.553 21:13:50 -- nvmf/common.sh@520 -- # config=()
00:18:12.553 21:13:50 -- nvmf/common.sh@520 -- # local subsystem config
00:18:12.553 21:13:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:18:12.553 21:13:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:18:12.553 {
00:18:12.553 "params": {
00:18:12.553 "name": "Nvme$subsystem",
00:18:12.553 "trtype": "$TEST_TRANSPORT",
00:18:12.553 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:12.553 "adrfam": "ipv4",
00:18:12.553 "trsvcid": "$NVMF_PORT",
00:18:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:12.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:12.553 "hdgst": ${hdgst:-false},
00:18:12.553 "ddgst": ${ddgst:-false}
00:18:12.553 },
00:18:12.553 "method": "bdev_nvme_attach_controller"
00:18:12.553 }
00:18:12.553 EOF
00:18:12.553 )")
00:18:12.553 21:13:50 -- nvmf/common.sh@542 -- # cat
00:18:12.553 [2024-06-08 21:13:50.594479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:12.553 [2024-06-08 21:13:50.594510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:12.553 21:13:50 -- nvmf/common.sh@544 -- # jq .
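The gen_nvmf_target_json output assembled above (config+=(...), cat, jq) is handed to the second bdevperf run through bash process substitution; the /dev/fd/63 in the traced command line is the read end of that substitution. Roughly, and assuming nvmf/common.sh (which defines gen_nvmf_target_json) has been sourced, the launch looks like:

# Sketch of how the test drives bdevperf off the generated JSON config
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
    -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!    # recorded by the test as perfpid (2366744 in this run)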
00:18:12.553 21:13:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:12.553 21:13:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:12.553 "params": { 00:18:12.553 "name": "Nvme1", 00:18:12.553 "trtype": "tcp", 00:18:12.553 "traddr": "10.0.0.2", 00:18:12.553 "adrfam": "ipv4", 00:18:12.553 "trsvcid": "4420", 00:18:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.553 "hdgst": false, 00:18:12.553 "ddgst": false 00:18:12.553 }, 00:18:12.553 "method": "bdev_nvme_attach_controller" 00:18:12.553 }' 00:18:12.553 [2024-06-08 21:13:50.606475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.553 [2024-06-08 21:13:50.606483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.553 [2024-06-08 21:13:50.618504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.553 [2024-06-08 21:13:50.618511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.553 [2024-06-08 21:13:50.630534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.553 [2024-06-08 21:13:50.630540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.553 [2024-06-08 21:13:50.633904] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:12.553 [2024-06-08 21:13:50.633949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366744 ] 00:18:12.553 [2024-06-08 21:13:50.642563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.553 [2024-06-08 21:13:50.642570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.814 [2024-06-08 21:13:50.654593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.814 [2024-06-08 21:13:50.654600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.814 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.814 [2024-06-08 21:13:50.666623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.666630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.678655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.678662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.690686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.690692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.691188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.815 [2024-06-08 21:13:50.702716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.702723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.714747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.714754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
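The message pair repeated through the rest of this run, 'Requested NSID 1 already in use' followed by 'Unable to add namespace', is the target rejecting attempts to attach NSID 1 a second time while bdevperf keeps I/O in flight; the nvmf_rpc_ns_paused frame indicates each attempt pauses the subsystem, fails the add, and resumes it. A hypothetical loop that would produce this pattern is sketched below; the actual zcopy.sh body is not part of this excerpt, so treat the loop shape and count as assumptions:

# Hypothetical reproduction of the repeated errors: NSID 1 is already attached
# to cnode1, so every add attempt pauses the subsystem, fails, and resumes it
# while bdevperf I/O continues against the same namespace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for _ in $(seq 1 200); do
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done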
00:18:12.815 [2024-06-08 21:13:50.726780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.726792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.738809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.738817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.750838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.750847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.753419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.815 [2024-06-08 21:13:50.762870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.762880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.774907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.774921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.786933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.786942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.798963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.798970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.810993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.811000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.823032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.823044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.835055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.835064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.847088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.847097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.859116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.859123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.871146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.871153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.883176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.883183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.815 [2024-06-08 21:13:50.895211] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.815 [2024-06-08 21:13:50.895219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.907241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.907249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.919273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.919279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.931306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.931314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.943340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.943348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.955371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.955378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.967406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.967414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.979435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.979444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:50.991464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:50.991477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.003498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.003505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.015529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.015537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.027561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.027568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.039593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.039603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.081996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.082007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.091733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.091742] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 Running I/O for 5 seconds... 00:18:13.076 [2024-06-08 21:13:51.110939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.110955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.120527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.120543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.135131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.135147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.148021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.148039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.076 [2024-06-08 21:13:51.160969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.076 [2024-06-08 21:13:51.160985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.174087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.174102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.187287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.187302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.200447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.200462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.213734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.213749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.226288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.226302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.239484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.239498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.252341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.252355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.265087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.265105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.278119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.278134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.290938] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.290953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.303680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.303695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.316578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.316592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.329040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.329055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.342059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.342073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.355208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.355223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.368250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.368264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.381341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.381356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.393760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.393774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.406881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.406895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.338 [2024-06-08 21:13:51.419909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.338 [2024-06-08 21:13:51.419923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.432700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.432714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.445666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.445680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.458427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.458441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.471325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.471339] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.484288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.484302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.497524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.497539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.510577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.510592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.522970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.522985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.535809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.535823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.548552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.548566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.561434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.561449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.573747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.573762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.586699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.586713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.599262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.599277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.612251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.612266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.625180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.625195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.638003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.638018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.650996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.651010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.663526] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.663540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.675853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.675867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.600 [2024-06-08 21:13:51.688346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.600 [2024-06-08 21:13:51.688360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.700975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.700990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.713775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.713788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.726577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.726592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.739420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.739434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.752475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.752489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.765242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.765256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.778000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.778014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.790785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.790800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.803420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.803435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.816179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.816194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.828941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.828956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.841748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.841762] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.854485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.854500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.862962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.862977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.871568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.861 [2024-06-08 21:13:51.871582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.861 [2024-06-08 21:13:51.880276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.880290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.888874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.888888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.897640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.897655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.905961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.905974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.914702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.914716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.923346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.923360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.932082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.932096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.940766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.940779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.862 [2024-06-08 21:13:51.949219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.862 [2024-06-08 21:13:51.949233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:51.958278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:51.958292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:51.966644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:51.966659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:51.975197] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:51.975211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:51.983942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:51.983956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:51.992442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:51.992455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:52.000295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:52.000309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:52.009369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:52.009383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:52.017739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.123 [2024-06-08 21:13:52.017752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.123 [2024-06-08 21:13:52.026628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.026641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.035335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.035349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.043809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.043823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.051878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.051892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.060552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.060566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.069232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.069246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.077654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.077667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.086443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.086456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.095026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.095040] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.104030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.104044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.112364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.112378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.121112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.121126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.129469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.129484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.137950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.137964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.146295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.146310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.155274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.155289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.163908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.163923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.172593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.172607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.181381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.181396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.190465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.190479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.198964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.198978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.124 [2024-06-08 21:13:52.207613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.124 [2024-06-08 21:13:52.207627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.385 [2024-06-08 21:13:52.216326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.216341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.224838] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.224853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.233594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.233608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.241961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.241975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.250546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.250559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.259104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.259118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.267102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.267120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.276013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.276028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.284503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.284517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.293428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.293442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.301814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.301828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.310477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.310491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.318885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.318899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.327298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.327312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.336042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.336056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.344642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.344656] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.353470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.353483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.362267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.362281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.370767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.370781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.379497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.379514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.388201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.388215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.396748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.396762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.405570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.405585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.414522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.414537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.423327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.423340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.432183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.432200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.440608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.440622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.449260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.449274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.457668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.457682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.466463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.466477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.386 [2024-06-08 21:13:52.475344] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.386 [2024-06-08 21:13:52.475358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.484346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.484361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.492934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.492948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.501394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.501412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.509805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.509818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.518631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.518645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.527310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.527325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.535564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.535578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.544330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.544344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.552334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.552347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.561181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.561195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.569886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.569900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.578166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.578179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.587063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.587076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.595398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.595420] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.604016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.604029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.612785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.612799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.621514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.621528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.630300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.630314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.639055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.639069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.647983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.647997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.656647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.656662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.664994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.665008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.673837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.673851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.682740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.648 [2024-06-08 21:13:52.682754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.648 [2024-06-08 21:13:52.691634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.649 [2024-06-08 21:13:52.691649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.649 [2024-06-08 21:13:52.700301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.649 [2024-06-08 21:13:52.700316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.649 [2024-06-08 21:13:52.708558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.649 [2024-06-08 21:13:52.708573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.649 [2024-06-08 21:13:52.717214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.649 [2024-06-08 21:13:52.717228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.649 [2024-06-08 21:13:52.725764] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.649 [2024-06-08 21:13:52.725778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.649 [2024-06-08 21:13:52.734209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.649 [2024-06-08 21:13:52.734223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.743113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.743127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.751957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.751971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.760409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.760426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.768759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.768773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.777634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.777648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.786414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.786429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.794919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.794933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.803587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.803601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.812249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.812263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.820860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.910 [2024-06-08 21:13:52.820874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.910 [2024-06-08 21:13:52.829638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.829652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.838435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.838449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.847480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.847495] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.856460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.856474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.865220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.865235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.874133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.874148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.882725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.882739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.891309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.891323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.899939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.899953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.908463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.908478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.917051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.917066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.925896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.925913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.934208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.934223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.942256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.942270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.950783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.950797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.959015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.959029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.967818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.967833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.976288] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.976302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.984841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.984855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.911 [2024-06-08 21:13:52.993182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.911 [2024-06-08 21:13:52.993196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.001898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.001913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.010806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.010822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.019504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.019518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.027917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.027931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.036412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.036427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.044761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.044775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.053373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.053388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.061970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.061985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.070611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.070625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.079032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.079046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.087636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.087651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.096482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.096496] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.104655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.104670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.113241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.113255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.121897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.121912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.130890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.130905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.139659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.139673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.148590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.148605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.157143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.157157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.165504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.165519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.174194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.174209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.183179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.183194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.191708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.191722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.200504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.200518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.209309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.209323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.218004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.218018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.226822] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.226837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.235095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.235110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.243856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.243870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.251648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.251663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.260505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.260519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.192 [2024-06-08 21:13:53.268854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.192 [2024-06-08 21:13:53.268868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.277591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.277605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.286355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.286369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.295210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.295224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.303873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.303888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.312134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.312149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.321004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.321019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.329608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.329622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.337983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.337998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.347056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.347070] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.355170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.355185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.364118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.364133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.372280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.372294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.380757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.380771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.389003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.389017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.398053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.398066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.406303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.406317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.414529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.414543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.423321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.423335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.431829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.431843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.440665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.440679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.449349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.449362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.457810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.457823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.466345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.466359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.454 [2024-06-08 21:13:53.474970] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.454 [2024-06-08 21:13:53.474984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.483687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.483700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.492341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.492355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.501014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.501028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.509571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.509585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.518127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.518141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.526813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.526827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.535916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.535930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.455 [2024-06-08 21:13:53.543616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.455 [2024-06-08 21:13:53.543630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.552588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.552601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.561282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.561296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.569384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.569405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.578287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.578301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.586329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.586343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.595244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.595258] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.604070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.604084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.612584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.612598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.621482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.621495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.629955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.629969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.638660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.638674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.647434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.647448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.655942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.655955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.664404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.664418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.672149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.672163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.681176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.681190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.689912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.689926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.698330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.698344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.707077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.707090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.715462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.715476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.723994] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.724008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.733030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.715 [2024-06-08 21:13:53.733047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.715 [2024-06-08 21:13:53.741223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.741237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.749783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.749797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.758545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.758559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.767021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.767035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.775633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.775647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.784323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.784337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.793184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.793198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.716 [2024-06-08 21:13:53.801694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.716 [2024-06-08 21:13:53.801708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.810751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.810765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.819547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.819560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.828037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.828052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.836957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.836971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.845447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.845461] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.854207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.854221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.862923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.862937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.871398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.871415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.880012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.880026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.888801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.888815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.897556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.897574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.906416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.906430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.914603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.914617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.923452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.923466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.931673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.931687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.940082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.940096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.948485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.948499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.957335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.957349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.966169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.966182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.974916] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.974931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.983527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.983541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:53.992158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:53.992172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:54.005153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:54.005167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:54.018419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:54.018433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:54.031249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:54.031263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:54.044133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:54.044147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.975 [2024-06-08 21:13:54.056679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.975 [2024-06-08 21:13:54.056694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.069469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.069483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.082013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.082027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.094495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.094513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.107501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.107516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.120453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.120467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.133468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.133482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.146033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.146048] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.158903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.158917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.172131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.172145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.184968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.184983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.197829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.197844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.210598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.210613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.223495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.223509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.236456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.236471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.249160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.249174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.262185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.262200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.274736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.274751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.287593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.287607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.299740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.299754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.313486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.313501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.235 [2024-06-08 21:13:54.326383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.235 [2024-06-08 21:13:54.326397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.339445] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.339463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.351706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.351720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.364711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.364725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.377658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.377672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.390210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.390225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.403148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.403163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.416106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.416122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.428956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.428971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.441882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.441897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.454747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.454762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.467122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.467137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.479573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.479587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.492239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.492254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.504821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.504836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.517525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.517540] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.496 [2024-06-08 21:13:54.530236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.496 [2024-06-08 21:13:54.530251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats at roughly 13 ms intervals from 21:13:54.530 through 21:13:56.013; intermediate repeats omitted ...] 00:18:18.062 [2024-06-08 21:13:56.013152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.013166]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.026079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.026093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.039117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.039135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.052056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.052070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.065176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.065189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.078137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.078151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.090854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.090869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.103869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.103884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.113414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.113428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 00:18:18.062 Latency(us) 00:18:18.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.062 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:18.062 Nvme1n1 : 5.01 20172.07 157.59 0.00 0.00 6338.54 2348.37 21954.56 00:18:18.062 =================================================================================================================== 00:18:18.062 Total : 20172.07 157.59 0.00 0.00 6338.54 2348.37 21954.56 00:18:18.062 [2024-06-08 21:13:56.125441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.125454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.137473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.137486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.062 [2024-06-08 21:13:56.149506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.062 [2024-06-08 21:13:56.149520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.322 [2024-06-08 21:13:56.161534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.322 [2024-06-08 21:13:56.161546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.322 [2024-06-08 21:13:56.173561] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.322 [2024-06-08 21:13:56.173571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.322 [2024-06-08 21:13:56.185591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.322 [2024-06-08 21:13:56.185600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.322 [2024-06-08 21:13:56.197622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.322 [2024-06-08 21:13:56.197629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.322 [2024-06-08 21:13:56.209655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.322 [2024-06-08 21:13:56.209665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.322 [2024-06-08 21:13:56.221685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.323 [2024-06-08 21:13:56.221694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.323 [2024-06-08 21:13:56.233716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.323 [2024-06-08 21:13:56.233726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.323 [2024-06-08 21:13:56.245747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.323 [2024-06-08 21:13:56.245756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2366744) - No such process 00:18:18.323 21:13:56 -- target/zcopy.sh@49 -- # wait 2366744 00:18:18.323 21:13:56 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:18.323 21:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:18.323 21:13:56 -- common/autotest_common.sh@10 -- # set +x 00:18:18.323 21:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:18.323 21:13:56 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:18.323 21:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:18.323 21:13:56 -- common/autotest_common.sh@10 -- # set +x 00:18:18.323 delay0 00:18:18.323 21:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:18.323 21:13:56 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:18.323 21:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:18.323 21:13:56 -- common/autotest_common.sh@10 -- # set +x 00:18:18.323 21:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:18.323 21:13:56 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:18.323 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.323 [2024-06-08 21:13:56.388064] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:24.905 Initializing NVMe Controllers 00:18:24.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:24.905 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:24.905 Initialization complete. Launching workers. 00:18:24.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:18:24.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 383, failed to submit 33 00:18:24.905 success 172, unsuccess 211, failed 0 00:18:24.905 21:14:02 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:24.905 21:14:02 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:24.905 21:14:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:24.905 21:14:02 -- nvmf/common.sh@116 -- # sync 00:18:24.905 21:14:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:24.905 21:14:02 -- nvmf/common.sh@119 -- # set +e 00:18:24.905 21:14:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:24.906 21:14:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:24.906 rmmod nvme_tcp 00:18:24.906 rmmod nvme_fabrics 00:18:24.906 rmmod nvme_keyring 00:18:24.906 21:14:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:24.906 21:14:02 -- nvmf/common.sh@123 -- # set -e 00:18:24.906 21:14:02 -- nvmf/common.sh@124 -- # return 0 00:18:24.906 21:14:02 -- nvmf/common.sh@477 -- # '[' -n 2364457 ']' 00:18:24.906 21:14:02 -- nvmf/common.sh@478 -- # killprocess 2364457 00:18:24.906 21:14:02 -- common/autotest_common.sh@926 -- # '[' -z 2364457 ']' 00:18:24.906 21:14:02 -- common/autotest_common.sh@930 -- # kill -0 2364457 00:18:24.906 21:14:02 -- common/autotest_common.sh@931 -- # uname 00:18:24.906 21:14:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:24.906 21:14:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2364457 00:18:24.906 21:14:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:24.906 21:14:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:24.906 21:14:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2364457' 00:18:24.906 killing process with pid 2364457 00:18:24.906 21:14:02 -- common/autotest_common.sh@945 -- # kill 2364457 00:18:24.906 21:14:02 -- common/autotest_common.sh@950 -- # wait 2364457 00:18:24.906 21:14:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:24.906 21:14:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:24.906 21:14:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:24.906 21:14:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.906 21:14:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:24.906 21:14:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.906 21:14:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.906 21:14:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.821 21:14:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:26.821 00:18:26.821 real 0m32.908s 00:18:26.821 user 0m44.853s 00:18:26.821 sys 0m9.790s 00:18:26.821 21:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.821 21:14:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.821 ************************************ 00:18:26.821 END TEST nvmf_zcopy 00:18:26.821 ************************************ 00:18:26.821 21:14:04 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:26.821 21:14:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:26.821 21:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:18:26.821 21:14:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.821 ************************************ 00:18:26.821 START TEST nvmf_nmic 00:18:26.821 ************************************ 00:18:26.821 21:14:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:27.082 * Looking for test storage... 00:18:27.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.082 21:14:04 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.082 21:14:04 -- nvmf/common.sh@7 -- # uname -s 00:18:27.082 21:14:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.082 21:14:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.082 21:14:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.082 21:14:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.082 21:14:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.082 21:14:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.082 21:14:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.082 21:14:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.082 21:14:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.082 21:14:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.082 21:14:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.082 21:14:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.082 21:14:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.082 21:14:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.082 21:14:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.082 21:14:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.082 21:14:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.082 21:14:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.082 21:14:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.082 21:14:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 21:14:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 21:14:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 21:14:04 -- paths/export.sh@5 -- # export PATH 00:18:27.082 21:14:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 21:14:04 -- nvmf/common.sh@46 -- # : 0 00:18:27.082 21:14:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:27.082 21:14:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:27.082 21:14:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:27.082 21:14:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.082 21:14:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.082 21:14:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:27.082 21:14:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:27.082 21:14:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:27.082 21:14:04 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.082 21:14:04 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.082 21:14:04 -- target/nmic.sh@14 -- # nvmftestinit 00:18:27.082 21:14:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:27.082 21:14:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.082 21:14:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:27.082 21:14:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:27.082 21:14:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:27.082 21:14:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.082 21:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.083 21:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.083 21:14:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:27.083 21:14:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:27.083 21:14:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:27.083 21:14:04 -- common/autotest_common.sh@10 -- # set +x 00:18:33.670 21:14:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:33.670 21:14:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:33.670 21:14:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:33.670 21:14:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:33.670 21:14:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:33.670 21:14:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:33.670 21:14:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:33.670 21:14:11 -- nvmf/common.sh@294 -- # net_devs=() 00:18:33.670 21:14:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:33.670 21:14:11 -- nvmf/common.sh@295 -- # 
e810=() 00:18:33.670 21:14:11 -- nvmf/common.sh@295 -- # local -ga e810 00:18:33.670 21:14:11 -- nvmf/common.sh@296 -- # x722=() 00:18:33.670 21:14:11 -- nvmf/common.sh@296 -- # local -ga x722 00:18:33.670 21:14:11 -- nvmf/common.sh@297 -- # mlx=() 00:18:33.670 21:14:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:33.670 21:14:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.670 21:14:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:33.670 21:14:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:33.670 21:14:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:33.670 21:14:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:33.670 21:14:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:33.670 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:33.670 21:14:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:33.670 21:14:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:33.670 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:33.670 21:14:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:33.670 21:14:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:33.670 21:14:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:33.670 21:14:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.670 21:14:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:33.670 21:14:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.670 21:14:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:33.670 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:18:33.670 21:14:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.670 21:14:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:33.670 21:14:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.670 21:14:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:33.670 21:14:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.670 21:14:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:33.670 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:33.670 21:14:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.671 21:14:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:33.671 21:14:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:33.671 21:14:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:33.671 21:14:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:33.671 21:14:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:33.671 21:14:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.671 21:14:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.671 21:14:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.671 21:14:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:33.671 21:14:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.671 21:14:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.671 21:14:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:33.671 21:14:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.671 21:14:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.671 21:14:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:33.671 21:14:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:33.671 21:14:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.671 21:14:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.932 21:14:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.932 21:14:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.932 21:14:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:33.932 21:14:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.932 21:14:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.932 21:14:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.932 21:14:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:33.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:18:33.932 00:18:33.932 --- 10.0.0.2 ping statistics --- 00:18:33.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.932 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:18:33.932 21:14:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:18:33.932 00:18:33.932 --- 10.0.0.1 ping statistics --- 00:18:33.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.932 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:18:33.932 21:14:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.932 21:14:12 -- nvmf/common.sh@410 -- # return 0 00:18:33.932 21:14:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:33.932 21:14:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.932 21:14:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:33.932 21:14:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:33.932 21:14:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.932 21:14:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:33.932 21:14:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:34.193 21:14:12 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:34.193 21:14:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:34.193 21:14:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:34.193 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:34.193 21:14:12 -- nvmf/common.sh@469 -- # nvmfpid=2373234 00:18:34.193 21:14:12 -- nvmf/common.sh@470 -- # waitforlisten 2373234 00:18:34.193 21:14:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:34.193 21:14:12 -- common/autotest_common.sh@819 -- # '[' -z 2373234 ']' 00:18:34.193 21:14:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.193 21:14:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:34.193 21:14:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.193 21:14:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:34.193 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:34.193 [2024-06-08 21:14:12.107592] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:34.193 [2024-06-08 21:14:12.107656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.193 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.193 [2024-06-08 21:14:12.177417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.193 [2024-06-08 21:14:12.253386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:34.193 [2024-06-08 21:14:12.253526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.193 [2024-06-08 21:14:12.253536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.193 [2024-06-08 21:14:12.253545] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.193 [2024-06-08 21:14:12.253690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.193 [2024-06-08 21:14:12.253808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.193 [2024-06-08 21:14:12.253969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.193 [2024-06-08 21:14:12.253970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.137 21:14:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:35.137 21:14:12 -- common/autotest_common.sh@852 -- # return 0 00:18:35.137 21:14:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:35.137 21:14:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 21:14:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.137 21:14:12 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.137 21:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 [2024-06-08 21:14:12.928577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.137 21:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:12 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:35.137 21:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 Malloc0 00:18:35.137 21:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:12 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:35.137 21:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 21:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:12 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.137 21:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 21:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:12 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.137 21:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 [2024-06-08 21:14:12.987988] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.137 21:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:12 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:35.137 test case1: single bdev can't be used in multiple subsystems 00:18:35.137 21:14:12 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:35.137 21:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 21:14:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:13 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:35.137 21:14:13 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:18:35.137 21:14:13 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 21:14:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:13 -- target/nmic.sh@28 -- # nmic_status=0 00:18:35.137 21:14:13 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:35.137 21:14:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:13 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 [2024-06-08 21:14:13.023921] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:35.137 [2024-06-08 21:14:13.023938] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:35.137 [2024-06-08 21:14:13.023946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.137 request: 00:18:35.137 { 00:18:35.137 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:35.137 "namespace": { 00:18:35.137 "bdev_name": "Malloc0" 00:18:35.137 }, 00:18:35.137 "method": "nvmf_subsystem_add_ns", 00:18:35.137 "req_id": 1 00:18:35.137 } 00:18:35.137 Got JSON-RPC error response 00:18:35.137 response: 00:18:35.137 { 00:18:35.137 "code": -32602, 00:18:35.137 "message": "Invalid parameters" 00:18:35.137 } 00:18:35.137 21:14:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:35.137 21:14:13 -- target/nmic.sh@29 -- # nmic_status=1 00:18:35.137 21:14:13 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:35.137 21:14:13 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:35.137 Adding namespace failed - expected result. 00:18:35.137 21:14:13 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:35.137 test case2: host connect to nvmf target in multiple paths 00:18:35.137 21:14:13 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.137 21:14:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.137 21:14:13 -- common/autotest_common.sh@10 -- # set +x 00:18:35.137 [2024-06-08 21:14:13.036076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:35.137 21:14:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.137 21:14:13 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:36.516 21:14:14 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:38.426 21:14:16 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:38.426 21:14:16 -- common/autotest_common.sh@1177 -- # local i=0 00:18:38.426 21:14:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.426 21:14:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:38.426 21:14:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:40.394 21:14:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:40.394 21:14:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:40.394 21:14:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:40.394 21:14:18 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:18:40.394 21:14:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.394 21:14:18 -- common/autotest_common.sh@1187 -- # return 0 00:18:40.394 21:14:18 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:40.394 [global] 00:18:40.394 thread=1 00:18:40.394 invalidate=1 00:18:40.394 rw=write 00:18:40.394 time_based=1 00:18:40.394 runtime=1 00:18:40.394 ioengine=libaio 00:18:40.394 direct=1 00:18:40.394 bs=4096 00:18:40.394 iodepth=1 00:18:40.394 norandommap=0 00:18:40.394 numjobs=1 00:18:40.394 00:18:40.394 verify_dump=1 00:18:40.394 verify_backlog=512 00:18:40.394 verify_state_save=0 00:18:40.394 do_verify=1 00:18:40.394 verify=crc32c-intel 00:18:40.394 [job0] 00:18:40.394 filename=/dev/nvme0n1 00:18:40.394 Could not set queue depth (nvme0n1) 00:18:40.655 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:40.655 fio-3.35 00:18:40.655 Starting 1 thread 00:18:42.033 00:18:42.033 job0: (groupid=0, jobs=1): err= 0: pid=2374684: Sat Jun 8 21:14:19 2024 00:18:42.033 read: IOPS=114, BW=459KiB/s (470kB/s)(464KiB/1010msec) 00:18:42.033 slat (nsec): min=7047, max=45587, avg=26356.59, stdev=5403.39 00:18:42.033 clat (usec): min=998, max=42697, avg=4365.91, stdev=10975.32 00:18:42.033 lat (usec): min=1026, max=42723, avg=4392.26, stdev=10975.01 00:18:42.033 clat percentiles (usec): 00:18:42.033 | 1.00th=[ 1020], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1156], 00:18:42.033 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1205], 60.00th=[ 1221], 00:18:42.033 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1319], 95.00th=[42206], 00:18:42.033 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:42.033 | 99.99th=[42730] 00:18:42.033 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:18:42.033 slat (usec): min=10, max=28068, avg=87.47, stdev=1239.01 00:18:42.033 clat (usec): min=572, max=1041, avg=879.12, stdev=75.24 00:18:42.033 lat (usec): min=605, max=28940, avg=966.58, stdev=1241.05 00:18:42.033 clat percentiles (usec): 00:18:42.033 | 1.00th=[ 676], 5.00th=[ 725], 10.00th=[ 783], 20.00th=[ 824], 00:18:42.033 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[ 898], 60.00th=[ 914], 00:18:42.033 | 70.00th=[ 922], 80.00th=[ 938], 90.00th=[ 963], 95.00th=[ 979], 00:18:42.033 | 99.00th=[ 1020], 99.50th=[ 1037], 99.90th=[ 1045], 99.95th=[ 1045], 00:18:42.033 | 99.99th=[ 1045] 00:18:42.033 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:42.033 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:42.033 lat (usec) : 750=4.94%, 1000=74.36% 00:18:42.033 lat (msec) : 2=19.27%, 50=1.43% 00:18:42.033 cpu : usr=0.89%, sys=2.18%, ctx=631, majf=0, minf=1 00:18:42.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.033 issued rwts: total=116,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.033 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.033 00:18:42.033 Run status group 0 (all jobs): 00:18:42.033 READ: bw=459KiB/s (470kB/s), 459KiB/s-459KiB/s (470kB/s-470kB/s), io=464KiB (475kB), run=1010-1010msec 00:18:42.033 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB 
(2097kB), run=1010-1010msec 00:18:42.033 00:18:42.033 Disk stats (read/write): 00:18:42.033 nvme0n1: ios=165/512, merge=0/0, ticks=945/410, in_queue=1355, util=99.00% 00:18:42.033 21:14:19 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:42.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:42.033 21:14:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:42.033 21:14:19 -- common/autotest_common.sh@1198 -- # local i=0 00:18:42.033 21:14:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:42.033 21:14:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.033 21:14:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:42.033 21:14:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.033 21:14:19 -- common/autotest_common.sh@1210 -- # return 0 00:18:42.033 21:14:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:42.033 21:14:19 -- target/nmic.sh@53 -- # nvmftestfini 00:18:42.033 21:14:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:42.033 21:14:19 -- nvmf/common.sh@116 -- # sync 00:18:42.033 21:14:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:42.033 21:14:19 -- nvmf/common.sh@119 -- # set +e 00:18:42.033 21:14:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:42.033 21:14:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:42.033 rmmod nvme_tcp 00:18:42.033 rmmod nvme_fabrics 00:18:42.033 rmmod nvme_keyring 00:18:42.033 21:14:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:42.033 21:14:19 -- nvmf/common.sh@123 -- # set -e 00:18:42.033 21:14:19 -- nvmf/common.sh@124 -- # return 0 00:18:42.033 21:14:19 -- nvmf/common.sh@477 -- # '[' -n 2373234 ']' 00:18:42.033 21:14:19 -- nvmf/common.sh@478 -- # killprocess 2373234 00:18:42.033 21:14:19 -- common/autotest_common.sh@926 -- # '[' -z 2373234 ']' 00:18:42.033 21:14:19 -- common/autotest_common.sh@930 -- # kill -0 2373234 00:18:42.033 21:14:19 -- common/autotest_common.sh@931 -- # uname 00:18:42.033 21:14:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:42.033 21:14:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2373234 00:18:42.033 21:14:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:42.033 21:14:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:42.033 21:14:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2373234' 00:18:42.033 killing process with pid 2373234 00:18:42.033 21:14:20 -- common/autotest_common.sh@945 -- # kill 2373234 00:18:42.033 21:14:20 -- common/autotest_common.sh@950 -- # wait 2373234 00:18:42.294 21:14:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:42.294 21:14:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:42.294 21:14:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:42.294 21:14:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.294 21:14:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:42.294 21:14:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.294 21:14:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.294 21:14:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.201 21:14:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:44.201 00:18:44.201 real 0m17.420s 00:18:44.201 user 0m51.334s 00:18:44.201 sys 0m6.004s 00:18:44.201 21:14:22 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.201 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 ************************************ 00:18:44.201 END TEST nvmf_nmic 00:18:44.201 ************************************ 00:18:44.462 21:14:22 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:44.462 21:14:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:44.462 21:14:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:44.462 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:44.462 ************************************ 00:18:44.462 START TEST nvmf_fio_target 00:18:44.462 ************************************ 00:18:44.462 21:14:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:44.462 * Looking for test storage... 00:18:44.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.462 21:14:22 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.462 21:14:22 -- nvmf/common.sh@7 -- # uname -s 00:18:44.462 21:14:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.462 21:14:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.462 21:14:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.462 21:14:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.462 21:14:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.462 21:14:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.462 21:14:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.462 21:14:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.462 21:14:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.462 21:14:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.462 21:14:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.462 21:14:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.462 21:14:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.462 21:14:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.462 21:14:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.462 21:14:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.462 21:14:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.462 21:14:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.462 21:14:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.462 21:14:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.462 21:14:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.462 21:14:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.462 21:14:22 -- paths/export.sh@5 -- # export PATH 00:18:44.462 21:14:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.462 21:14:22 -- nvmf/common.sh@46 -- # : 0 00:18:44.462 21:14:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:44.462 21:14:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:44.462 21:14:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:44.462 21:14:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.462 21:14:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.462 21:14:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:44.462 21:14:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:44.462 21:14:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:44.463 21:14:22 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.463 21:14:22 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.463 21:14:22 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.463 21:14:22 -- target/fio.sh@16 -- # nvmftestinit 00:18:44.463 21:14:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:44.463 21:14:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.463 21:14:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:44.463 21:14:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:44.463 21:14:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:44.463 21:14:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.463 21:14:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.463 21:14:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.463 21:14:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:44.463 21:14:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:44.463 21:14:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:44.463 21:14:22 -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.594 21:14:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:52.594 21:14:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:52.594 21:14:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:52.594 21:14:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:52.594 21:14:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:52.594 21:14:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:52.594 21:14:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:52.594 21:14:29 -- nvmf/common.sh@294 -- # net_devs=() 00:18:52.594 21:14:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:52.594 21:14:29 -- nvmf/common.sh@295 -- # e810=() 00:18:52.594 21:14:29 -- nvmf/common.sh@295 -- # local -ga e810 00:18:52.594 21:14:29 -- nvmf/common.sh@296 -- # x722=() 00:18:52.594 21:14:29 -- nvmf/common.sh@296 -- # local -ga x722 00:18:52.594 21:14:29 -- nvmf/common.sh@297 -- # mlx=() 00:18:52.594 21:14:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:52.594 21:14:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.594 21:14:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:52.594 21:14:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:52.594 21:14:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:52.594 21:14:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:52.595 21:14:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:52.595 21:14:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:52.595 21:14:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:52.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:52.595 21:14:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:52.595 21:14:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:52.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:52.595 21:14:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:18:52.595 21:14:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:52.595 21:14:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:52.595 21:14:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.595 21:14:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:52.595 21:14:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.595 21:14:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:52.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:52.595 21:14:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.595 21:14:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:52.595 21:14:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.595 21:14:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:52.595 21:14:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.595 21:14:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:52.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:52.595 21:14:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.595 21:14:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:52.595 21:14:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:52.595 21:14:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:52.595 21:14:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.595 21:14:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.595 21:14:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.595 21:14:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:52.595 21:14:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.595 21:14:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.595 21:14:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:52.595 21:14:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.595 21:14:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.595 21:14:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:52.595 21:14:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:52.595 21:14:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.595 21:14:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.595 21:14:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.595 21:14:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.595 21:14:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:52.595 21:14:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.595 21:14:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.595 21:14:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.595 21:14:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:52.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:52.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:18:52.595 00:18:52.595 --- 10.0.0.2 ping statistics --- 00:18:52.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.595 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:18:52.595 21:14:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:18:52.595 00:18:52.595 --- 10.0.0.1 ping statistics --- 00:18:52.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.595 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:18:52.595 21:14:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.595 21:14:29 -- nvmf/common.sh@410 -- # return 0 00:18:52.595 21:14:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:52.595 21:14:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.595 21:14:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:52.595 21:14:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.595 21:14:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:52.595 21:14:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:52.595 21:14:29 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:52.595 21:14:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:52.595 21:14:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:52.595 21:14:29 -- common/autotest_common.sh@10 -- # set +x 00:18:52.595 21:14:29 -- nvmf/common.sh@469 -- # nvmfpid=2379127 00:18:52.595 21:14:29 -- nvmf/common.sh@470 -- # waitforlisten 2379127 00:18:52.595 21:14:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.595 21:14:29 -- common/autotest_common.sh@819 -- # '[' -z 2379127 ']' 00:18:52.595 21:14:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.595 21:14:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:52.595 21:14:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.595 21:14:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:52.595 21:14:29 -- common/autotest_common.sh@10 -- # set +x 00:18:52.595 [2024-06-08 21:14:29.669243] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:52.595 [2024-06-08 21:14:29.669308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.595 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.595 [2024-06-08 21:14:29.739845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.595 [2024-06-08 21:14:29.813479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:52.595 [2024-06-08 21:14:29.813610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.595 [2024-06-08 21:14:29.813620] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:52.595 [2024-06-08 21:14:29.813628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.595 [2024-06-08 21:14:29.813801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.595 [2024-06-08 21:14:29.813916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.595 [2024-06-08 21:14:29.814072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.595 [2024-06-08 21:14:29.814073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.595 21:14:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:52.595 21:14:30 -- common/autotest_common.sh@852 -- # return 0 00:18:52.595 21:14:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:52.595 21:14:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:52.595 21:14:30 -- common/autotest_common.sh@10 -- # set +x 00:18:52.595 21:14:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.595 21:14:30 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:52.595 [2024-06-08 21:14:30.623008] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.595 21:14:30 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.854 21:14:30 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:52.854 21:14:30 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.113 21:14:30 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:53.113 21:14:30 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.113 21:14:31 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:53.113 21:14:31 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.372 21:14:31 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:53.372 21:14:31 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:53.630 21:14:31 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.630 21:14:31 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:53.630 21:14:31 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.888 21:14:31 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:53.888 21:14:31 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.147 21:14:32 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:54.147 21:14:32 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:54.147 21:14:32 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:54.406 21:14:32 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:54.406 21:14:32 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.664 21:14:32 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:54.664 21:14:32 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:54.665 21:14:32 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.923 [2024-06-08 21:14:32.812293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.923 21:14:32 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:54.923 21:14:33 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:55.179 21:14:33 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:57.082 21:14:34 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:57.082 21:14:34 -- common/autotest_common.sh@1177 -- # local i=0 00:18:57.082 21:14:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.082 21:14:34 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:18:57.082 21:14:34 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:18:57.082 21:14:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:59.002 21:14:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:59.002 21:14:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:59.002 21:14:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.002 21:14:36 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:18:59.002 21:14:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.002 21:14:36 -- common/autotest_common.sh@1187 -- # return 0 00:18:59.002 21:14:36 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:59.002 [global] 00:18:59.002 thread=1 00:18:59.002 invalidate=1 00:18:59.002 rw=write 00:18:59.002 time_based=1 00:18:59.002 runtime=1 00:18:59.002 ioengine=libaio 00:18:59.002 direct=1 00:18:59.002 bs=4096 00:18:59.002 iodepth=1 00:18:59.002 norandommap=0 00:18:59.002 numjobs=1 00:18:59.002 00:18:59.002 verify_dump=1 00:18:59.002 verify_backlog=512 00:18:59.002 verify_state_save=0 00:18:59.002 do_verify=1 00:18:59.002 verify=crc32c-intel 00:18:59.002 [job0] 00:18:59.002 filename=/dev/nvme0n1 00:18:59.002 [job1] 00:18:59.002 filename=/dev/nvme0n2 00:18:59.002 [job2] 00:18:59.002 filename=/dev/nvme0n3 00:18:59.002 [job3] 00:18:59.002 filename=/dev/nvme0n4 00:18:59.002 Could not set queue depth (nvme0n1) 00:18:59.002 Could not set queue depth (nvme0n2) 00:18:59.002 Could not set queue depth (nvme0n3) 00:18:59.003 Could not set queue depth (nvme0n4) 00:18:59.261 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.261 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.261 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:18:59.261 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.261 fio-3.35 00:18:59.261 Starting 4 threads 00:19:00.647 00:19:00.647 job0: (groupid=0, jobs=1): err= 0: pid=2380975: Sat Jun 8 21:14:38 2024 00:19:00.647 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:00.647 slat (nsec): min=6525, max=44348, avg=23441.23, stdev=4732.06 00:19:00.647 clat (usec): min=438, max=1076, avg=844.58, stdev=102.45 00:19:00.647 lat (usec): min=462, max=1100, avg=868.02, stdev=103.00 00:19:00.647 clat percentiles (usec): 00:19:00.647 | 1.00th=[ 594], 5.00th=[ 652], 10.00th=[ 701], 20.00th=[ 758], 00:19:00.647 | 30.00th=[ 799], 40.00th=[ 832], 50.00th=[ 857], 60.00th=[ 889], 00:19:00.647 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 979], 00:19:00.647 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[ 1074], 99.95th=[ 1074], 00:19:00.647 | 99.99th=[ 1074] 00:19:00.647 write: IOPS=932, BW=3728KiB/s (3818kB/s)(3732KiB/1001msec); 0 zone resets 00:19:00.648 slat (nsec): min=9325, max=66237, avg=29253.30, stdev=7666.88 00:19:00.648 clat (usec): min=184, max=832, avg=554.56, stdev=117.29 00:19:00.648 lat (usec): min=194, max=863, avg=583.81, stdev=119.77 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 273], 5.00th=[ 347], 10.00th=[ 408], 20.00th=[ 449], 00:19:00.648 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 578], 00:19:00.648 | 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 742], 00:19:00.648 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 832], 99.95th=[ 832], 00:19:00.648 | 99.99th=[ 832] 00:19:00.648 bw ( KiB/s): min= 4096, max= 4096, per=34.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.648 lat (usec) : 250=0.48%, 500=18.89%, 750=49.27%, 1000=30.52% 00:19:00.648 lat (msec) : 2=0.83% 00:19:00.648 cpu : usr=3.00%, sys=3.20%, ctx=1446, majf=0, minf=1 00:19:00.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 issued rwts: total=512,933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.648 job1: (groupid=0, jobs=1): err= 0: pid=2380976: Sat Jun 8 21:14:38 2024 00:19:00.648 read: IOPS=527, BW=2110KiB/s (2161kB/s)(2112KiB/1001msec) 00:19:00.648 slat (nsec): min=6688, max=54646, avg=22938.75, stdev=6814.42 00:19:00.648 clat (usec): min=381, max=1006, avg=780.31, stdev=107.91 00:19:00.648 lat (usec): min=389, max=1030, avg=803.25, stdev=110.00 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 490], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 685], 00:19:00.648 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 799], 60.00th=[ 832], 00:19:00.648 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 914], 00:19:00.648 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1004], 99.95th=[ 1004], 00:19:00.648 | 99.99th=[ 1004] 00:19:00.648 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:00.648 slat (nsec): min=9303, max=62998, avg=31082.29, stdev=8391.87 00:19:00.648 clat (usec): min=149, max=981, avg=520.34, stdev=115.26 00:19:00.648 lat (usec): min=162, max=992, avg=551.42, stdev=117.16 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 253], 5.00th=[ 302], 10.00th=[ 379], 20.00th=[ 416], 
00:19:00.648 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 545], 00:19:00.648 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 693], 00:19:00.648 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 881], 99.95th=[ 979], 00:19:00.648 | 99.99th=[ 979] 00:19:00.648 bw ( KiB/s): min= 4096, max= 4096, per=34.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.648 lat (usec) : 250=0.58%, 500=26.03%, 750=50.77%, 1000=22.49% 00:19:00.648 lat (msec) : 2=0.13% 00:19:00.648 cpu : usr=2.80%, sys=3.90%, ctx=1555, majf=0, minf=1 00:19:00.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.648 job2: (groupid=0, jobs=1): err= 0: pid=2380978: Sat Jun 8 21:14:38 2024 00:19:00.648 read: IOPS=362, BW=1451KiB/s (1485kB/s)(1452KiB/1001msec) 00:19:00.648 slat (nsec): min=24736, max=47585, avg=25630.50, stdev=2972.77 00:19:00.648 clat (usec): min=1060, max=1654, avg=1412.27, stdev=94.61 00:19:00.648 lat (usec): min=1085, max=1679, avg=1437.90, stdev=94.74 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 1172], 5.00th=[ 1254], 10.00th=[ 1303], 20.00th=[ 1336], 00:19:00.648 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[ 1418], 60.00th=[ 1434], 00:19:00.648 | 70.00th=[ 1467], 80.00th=[ 1483], 90.00th=[ 1532], 95.00th=[ 1549], 00:19:00.648 | 99.00th=[ 1614], 99.50th=[ 1647], 99.90th=[ 1647], 99.95th=[ 1647], 00:19:00.648 | 99.99th=[ 1647] 00:19:00.648 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:00.648 slat (nsec): min=10249, max=51865, avg=33575.96, stdev=5317.07 00:19:00.648 clat (usec): min=510, max=1128, avg=886.09, stdev=92.70 00:19:00.648 lat (usec): min=522, max=1161, avg=919.67, stdev=93.00 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 660], 5.00th=[ 717], 10.00th=[ 758], 20.00th=[ 816], 00:19:00.648 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[ 889], 60.00th=[ 914], 00:19:00.648 | 70.00th=[ 938], 80.00th=[ 963], 90.00th=[ 996], 95.00th=[ 1029], 00:19:00.648 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1123], 99.95th=[ 1123], 00:19:00.648 | 99.99th=[ 1123] 00:19:00.648 bw ( KiB/s): min= 4096, max= 4096, per=34.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.648 lat (usec) : 750=5.26%, 1000=47.54% 00:19:00.648 lat (msec) : 2=47.20% 00:19:00.648 cpu : usr=1.60%, sys=2.50%, ctx=876, majf=0, minf=1 00:19:00.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 issued rwts: total=363,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.648 job3: (groupid=0, jobs=1): err= 0: pid=2380979: Sat Jun 8 21:14:38 2024 00:19:00.648 read: IOPS=422, BW=1689KiB/s (1729kB/s)(1692KiB/1002msec) 00:19:00.648 slat (nsec): min=8516, max=47601, avg=25974.34, stdev=3433.25 00:19:00.648 clat (usec): min=854, max=1507, avg=1245.57, stdev=108.62 00:19:00.648 lat (usec): min=881, max=1532, 
avg=1271.54, stdev=108.85 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 971], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1172], 00:19:00.648 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1287], 00:19:00.648 | 70.00th=[ 1303], 80.00th=[ 1336], 90.00th=[ 1369], 95.00th=[ 1385], 00:19:00.648 | 99.00th=[ 1450], 99.50th=[ 1467], 99.90th=[ 1516], 99.95th=[ 1516], 00:19:00.648 | 99.99th=[ 1516] 00:19:00.648 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:00.648 slat (nsec): min=9066, max=53086, avg=32356.42, stdev=5666.72 00:19:00.648 clat (usec): min=410, max=1087, avg=857.90, stdev=122.62 00:19:00.648 lat (usec): min=442, max=1120, avg=890.26, stdev=124.17 00:19:00.648 clat percentiles (usec): 00:19:00.648 | 1.00th=[ 529], 5.00th=[ 611], 10.00th=[ 676], 20.00th=[ 742], 00:19:00.648 | 30.00th=[ 816], 40.00th=[ 848], 50.00th=[ 889], 60.00th=[ 914], 00:19:00.648 | 70.00th=[ 938], 80.00th=[ 963], 90.00th=[ 979], 95.00th=[ 1020], 00:19:00.648 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1090], 99.95th=[ 1090], 00:19:00.648 | 99.99th=[ 1090] 00:19:00.648 bw ( KiB/s): min= 4096, max= 4096, per=34.42%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.648 lat (usec) : 500=0.43%, 750=10.59%, 1000=41.39% 00:19:00.648 lat (msec) : 2=47.59% 00:19:00.648 cpu : usr=1.60%, sys=4.10%, ctx=935, majf=0, minf=1 00:19:00.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.648 issued rwts: total=423,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.648 00:19:00.648 Run status group 0 (all jobs): 00:19:00.648 READ: bw=7289KiB/s (7464kB/s), 1451KiB/s-2110KiB/s (1485kB/s-2161kB/s), io=7304KiB (7479kB), run=1001-1002msec 00:19:00.648 WRITE: bw=11.6MiB/s (12.2MB/s), 2044KiB/s-4092KiB/s (2093kB/s-4190kB/s), io=11.6MiB (12.2MB), run=1001-1002msec 00:19:00.648 00:19:00.648 Disk stats (read/write): 00:19:00.648 nvme0n1: ios=562/658, merge=0/0, ticks=474/335, in_queue=809, util=87.27% 00:19:00.648 nvme0n2: ios=535/736, merge=0/0, ticks=1347/337, in_queue=1684, util=97.14% 00:19:00.648 nvme0n3: ios=273/512, merge=0/0, ticks=1281/427, in_queue=1708, util=96.93% 00:19:00.648 nvme0n4: ios=291/512, merge=0/0, ticks=336/370, in_queue=706, util=89.41% 00:19:00.648 21:14:38 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:00.648 [global] 00:19:00.648 thread=1 00:19:00.648 invalidate=1 00:19:00.648 rw=randwrite 00:19:00.648 time_based=1 00:19:00.648 runtime=1 00:19:00.648 ioengine=libaio 00:19:00.648 direct=1 00:19:00.648 bs=4096 00:19:00.648 iodepth=1 00:19:00.648 norandommap=0 00:19:00.648 numjobs=1 00:19:00.648 00:19:00.648 verify_dump=1 00:19:00.648 verify_backlog=512 00:19:00.648 verify_state_save=0 00:19:00.648 do_verify=1 00:19:00.648 verify=crc32c-intel 00:19:00.648 [job0] 00:19:00.648 filename=/dev/nvme0n1 00:19:00.648 [job1] 00:19:00.648 filename=/dev/nvme0n2 00:19:00.648 [job2] 00:19:00.648 filename=/dev/nvme0n3 00:19:00.648 [job3] 00:19:00.648 filename=/dev/nvme0n4 00:19:00.648 Could not set queue depth (nvme0n1) 00:19:00.648 Could not set queue depth (nvme0n2) 00:19:00.648 Could not set queue depth (nvme0n3) 00:19:00.648 
Could not set queue depth (nvme0n4) 00:19:00.908 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.908 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.908 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.908 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.908 fio-3.35 00:19:00.908 Starting 4 threads 00:19:02.328 00:19:02.328 job0: (groupid=0, jobs=1): err= 0: pid=2381440: Sat Jun 8 21:14:40 2024 00:19:02.328 read: IOPS=396, BW=1586KiB/s (1624kB/s)(1588KiB/1001msec) 00:19:02.328 slat (nsec): min=25530, max=56220, avg=26853.81, stdev=3837.28 00:19:02.328 clat (usec): min=984, max=1385, avg=1223.88, stdev=60.17 00:19:02.328 lat (usec): min=1010, max=1411, avg=1250.73, stdev=60.25 00:19:02.328 clat percentiles (usec): 00:19:02.328 | 1.00th=[ 1029], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1188], 00:19:02.328 | 30.00th=[ 1205], 40.00th=[ 1221], 50.00th=[ 1237], 60.00th=[ 1237], 00:19:02.328 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1287], 95.00th=[ 1303], 00:19:02.328 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1385], 99.95th=[ 1385], 00:19:02.328 | 99.99th=[ 1385] 00:19:02.328 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:02.328 slat (nsec): min=10036, max=54914, avg=33410.89, stdev=3797.25 00:19:02.328 clat (usec): min=569, max=1328, avg=931.03, stdev=78.98 00:19:02.328 lat (usec): min=602, max=1361, avg=964.44, stdev=79.22 00:19:02.328 clat percentiles (usec): 00:19:02.328 | 1.00th=[ 693], 5.00th=[ 791], 10.00th=[ 824], 20.00th=[ 873], 00:19:02.328 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 955], 00:19:02.328 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1012], 95.00th=[ 1037], 00:19:02.328 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1336], 99.95th=[ 1336], 00:19:02.328 | 99.99th=[ 1336] 00:19:02.328 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:19:02.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:02.328 lat (usec) : 750=1.43%, 1000=46.53% 00:19:02.328 lat (msec) : 2=52.04% 00:19:02.328 cpu : usr=2.50%, sys=3.30%, ctx=913, majf=0, minf=1 00:19:02.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.328 issued rwts: total=397,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.328 job1: (groupid=0, jobs=1): err= 0: pid=2381457: Sat Jun 8 21:14:40 2024 00:19:02.328 read: IOPS=486, BW=1946KiB/s (1993kB/s)(1948KiB/1001msec) 00:19:02.328 slat (nsec): min=7029, max=60196, avg=26185.84, stdev=4761.81 00:19:02.328 clat (usec): min=875, max=1431, avg=1191.68, stdev=79.78 00:19:02.328 lat (usec): min=901, max=1458, avg=1217.86, stdev=80.56 00:19:02.328 clat percentiles (usec): 00:19:02.328 | 1.00th=[ 930], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1139], 00:19:02.329 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:19:02.329 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1303], 00:19:02.329 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1434], 99.95th=[ 1434], 00:19:02.329 | 99.99th=[ 1434] 00:19:02.329 
write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:02.329 slat (nsec): min=9628, max=69042, avg=30636.22, stdev=9544.40 00:19:02.329 clat (usec): min=456, max=975, avg=743.31, stdev=93.19 00:19:02.329 lat (usec): min=466, max=1009, avg=773.95, stdev=97.32 00:19:02.329 clat percentiles (usec): 00:19:02.329 | 1.00th=[ 498], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 668], 00:19:02.329 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:19:02.329 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 889], 00:19:02.329 | 99.00th=[ 947], 99.50th=[ 955], 99.90th=[ 979], 99.95th=[ 979], 00:19:02.329 | 99.99th=[ 979] 00:19:02.329 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:19:02.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:02.329 lat (usec) : 500=0.60%, 750=25.63%, 1000=26.43% 00:19:02.329 lat (msec) : 2=47.35% 00:19:02.329 cpu : usr=2.10%, sys=3.80%, ctx=1001, majf=0, minf=1 00:19:02.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.329 issued rwts: total=487,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.329 job2: (groupid=0, jobs=1): err= 0: pid=2381476: Sat Jun 8 21:14:40 2024 00:19:02.329 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1026msec) 00:19:02.329 slat (nsec): min=26938, max=28642, avg=27339.39, stdev=442.88 00:19:02.329 clat (usec): min=959, max=42958, avg=39709.78, stdev=9685.71 00:19:02.329 lat (usec): min=986, max=42985, avg=39737.12, stdev=9685.78 00:19:02.329 clat percentiles (usec): 00:19:02.329 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41681], 00:19:02.329 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:02.329 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:19:02.329 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:02.329 | 99.99th=[42730] 00:19:02.329 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:19:02.329 slat (nsec): min=9053, max=64256, avg=30734.58, stdev=9743.70 00:19:02.329 clat (usec): min=203, max=1838, avg=562.76, stdev=137.28 00:19:02.329 lat (usec): min=218, max=1872, avg=593.49, stdev=140.29 00:19:02.329 clat percentiles (usec): 00:19:02.329 | 1.00th=[ 251], 5.00th=[ 347], 10.00th=[ 404], 20.00th=[ 465], 00:19:02.329 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 594], 00:19:02.329 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 725], 00:19:02.329 | 99.00th=[ 832], 99.50th=[ 1139], 99.90th=[ 1844], 99.95th=[ 1844], 00:19:02.329 | 99.99th=[ 1844] 00:19:02.329 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:19:02.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:02.329 lat (usec) : 250=0.94%, 500=25.28%, 750=67.55%, 1000=2.45% 00:19:02.329 lat (msec) : 2=0.57%, 50=3.21% 00:19:02.329 cpu : usr=0.88%, sys=2.15%, ctx=532, majf=0, minf=1 00:19:02.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.329 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:02.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.329 job3: (groupid=0, jobs=1): err= 0: pid=2381484: Sat Jun 8 21:14:40 2024 00:19:02.329 read: IOPS=15, BW=63.9KiB/s (65.5kB/s)(64.0KiB/1001msec) 00:19:02.329 slat (nsec): min=8181, max=46968, avg=24983.06, stdev=8247.27 00:19:02.329 clat (usec): min=1140, max=42106, avg=31748.56, stdev=18217.99 00:19:02.329 lat (usec): min=1165, max=42132, avg=31773.54, stdev=18222.24 00:19:02.329 clat percentiles (usec): 00:19:02.329 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[ 1156], 20.00th=[ 1254], 00:19:02.329 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:19:02.329 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:02.329 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:02.329 | 99.99th=[42206] 00:19:02.329 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:02.329 slat (nsec): min=9818, max=69474, avg=32405.92, stdev=4962.45 00:19:02.329 clat (usec): min=405, max=1232, avg=916.98, stdev=88.89 00:19:02.329 lat (usec): min=415, max=1266, avg=949.39, stdev=89.78 00:19:02.329 clat percentiles (usec): 00:19:02.329 | 1.00th=[ 676], 5.00th=[ 775], 10.00th=[ 807], 20.00th=[ 848], 00:19:02.329 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[ 930], 60.00th=[ 947], 00:19:02.329 | 70.00th=[ 963], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1045], 00:19:02.329 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:19:02.329 | 99.99th=[ 1237] 00:19:02.329 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:19:02.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:02.329 lat (usec) : 500=0.19%, 750=3.60%, 1000=80.68% 00:19:02.329 lat (msec) : 2=13.26%, 50=2.27% 00:19:02.329 cpu : usr=0.90%, sys=1.60%, ctx=531, majf=0, minf=1 00:19:02.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.329 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.329 00:19:02.329 Run status group 0 (all jobs): 00:19:02.329 READ: bw=3579KiB/s (3665kB/s), 63.9KiB/s-1946KiB/s (65.5kB/s-1993kB/s), io=3672KiB (3760kB), run=1001-1026msec 00:19:02.329 WRITE: bw=7984KiB/s (8176kB/s), 1996KiB/s-2046KiB/s (2044kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1026msec 00:19:02.329 00:19:02.329 Disk stats (read/write): 00:19:02.329 nvme0n1: ios=290/512, merge=0/0, ticks=1139/389, in_queue=1528, util=84.27% 00:19:02.329 nvme0n2: ios=378/512, merge=0/0, ticks=1267/326, in_queue=1593, util=88.19% 00:19:02.329 nvme0n3: ios=63/512, merge=0/0, ticks=641/212, in_queue=853, util=95.47% 00:19:02.329 nvme0n4: ios=59/512, merge=0/0, ticks=631/433, in_queue=1064, util=97.23% 00:19:02.329 21:14:40 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:02.329 [global] 00:19:02.329 thread=1 00:19:02.329 invalidate=1 00:19:02.329 rw=write 00:19:02.329 time_based=1 00:19:02.329 runtime=1 00:19:02.329 ioengine=libaio 00:19:02.329 direct=1 00:19:02.329 bs=4096 00:19:02.329 iodepth=128 00:19:02.329 norandommap=0 00:19:02.329 numjobs=1 00:19:02.329 00:19:02.329 verify_dump=1 00:19:02.329 verify_backlog=512 00:19:02.329 verify_state_save=0 
00:19:02.329 do_verify=1 00:19:02.329 verify=crc32c-intel 00:19:02.329 [job0] 00:19:02.329 filename=/dev/nvme0n1 00:19:02.329 [job1] 00:19:02.329 filename=/dev/nvme0n2 00:19:02.329 [job2] 00:19:02.329 filename=/dev/nvme0n3 00:19:02.329 [job3] 00:19:02.329 filename=/dev/nvme0n4 00:19:02.329 Could not set queue depth (nvme0n1) 00:19:02.329 Could not set queue depth (nvme0n2) 00:19:02.329 Could not set queue depth (nvme0n3) 00:19:02.329 Could not set queue depth (nvme0n4) 00:19:02.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.588 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.588 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.588 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.588 fio-3.35 00:19:02.588 Starting 4 threads 00:19:03.979 00:19:03.979 job0: (groupid=0, jobs=1): err= 0: pid=2381926: Sat Jun 8 21:14:41 2024 00:19:03.979 read: IOPS=7013, BW=27.4MiB/s (28.7MB/s)(27.6MiB/1008msec) 00:19:03.979 slat (nsec): min=881, max=14787k, avg=67180.66, stdev=507296.82 00:19:03.979 clat (usec): min=1493, max=30826, avg=9115.32, stdev=4063.85 00:19:03.979 lat (usec): min=1499, max=30888, avg=9182.50, stdev=4095.48 00:19:03.979 clat percentiles (usec): 00:19:03.979 | 1.00th=[ 2573], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6259], 00:19:03.979 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7898], 60.00th=[ 8455], 00:19:03.979 | 70.00th=[ 9634], 80.00th=[11600], 90.00th=[16057], 95.00th=[17171], 00:19:03.979 | 99.00th=[21627], 99.50th=[23462], 99.90th=[26346], 99.95th=[26608], 00:19:03.979 | 99.99th=[30802] 00:19:03.979 write: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec); 0 zone resets 00:19:03.979 slat (nsec): min=1567, max=14757k, avg=64908.92, stdev=425381.63 00:19:03.979 clat (usec): min=1026, max=31603, avg=8812.16, stdev=3850.27 00:19:03.979 lat (usec): min=1183, max=31607, avg=8877.07, stdev=3868.83 00:19:03.979 clat percentiles (usec): 00:19:03.979 | 1.00th=[ 3425], 5.00th=[ 4490], 10.00th=[ 5407], 20.00th=[ 6194], 00:19:03.979 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8717], 00:19:03.979 | 70.00th=[ 9503], 80.00th=[10814], 90.00th=[14091], 95.00th=[16581], 00:19:03.979 | 99.00th=[24773], 99.50th=[26870], 99.90th=[31327], 99.95th=[31327], 00:19:03.979 | 99.99th=[31589] 00:19:03.979 bw ( KiB/s): min=26128, max=31216, per=36.40%, avg=28672.00, stdev=3597.76, samples=2 00:19:03.979 iops : min= 6532, max= 7804, avg=7168.00, stdev=899.44, samples=2 00:19:03.979 lat (msec) : 2=0.50%, 4=1.73%, 10=71.36%, 20=24.29%, 50=2.12% 00:19:03.979 cpu : usr=3.77%, sys=5.46%, ctx=720, majf=0, minf=1 00:19:03.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:03.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.979 issued rwts: total=7070,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.979 job1: (groupid=0, jobs=1): err= 0: pid=2381944: Sat Jun 8 21:14:41 2024 00:19:03.979 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:19:03.979 slat (nsec): min=925, max=22113k, avg=78026.95, stdev=714629.98 00:19:03.979 clat (usec): min=1558, max=47523, avg=11226.21, stdev=7815.46 
00:19:03.979 lat (usec): min=1662, max=47558, avg=11304.24, stdev=7890.04 00:19:03.979 clat percentiles (usec): 00:19:03.979 | 1.00th=[ 2409], 5.00th=[ 3654], 10.00th=[ 4883], 20.00th=[ 5735], 00:19:03.979 | 30.00th=[ 6194], 40.00th=[ 6783], 50.00th=[ 7767], 60.00th=[ 9372], 00:19:03.979 | 70.00th=[11207], 80.00th=[17957], 90.00th=[25822], 95.00th=[27657], 00:19:03.979 | 99.00th=[30016], 99.50th=[33424], 99.90th=[46400], 99.95th=[46400], 00:19:03.979 | 99.99th=[47449] 00:19:03.979 write: IOPS=5452, BW=21.3MiB/s (22.3MB/s)(21.5MiB/1008msec); 0 zone resets 00:19:03.979 slat (nsec): min=1616, max=16042k, avg=82020.03, stdev=490628.80 00:19:03.979 clat (usec): min=1165, max=31569, avg=12663.33, stdev=6032.42 00:19:03.979 lat (usec): min=1181, max=37313, avg=12745.35, stdev=6061.01 00:19:03.979 clat percentiles (usec): 00:19:03.980 | 1.00th=[ 2868], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 7111], 00:19:03.980 | 30.00th=[ 8029], 40.00th=[ 9765], 50.00th=[11731], 60.00th=[13960], 00:19:03.980 | 70.00th=[16319], 80.00th=[18220], 90.00th=[21103], 95.00th=[22676], 00:19:03.980 | 99.00th=[29754], 99.50th=[30016], 99.90th=[31589], 99.95th=[31589], 00:19:03.980 | 99.99th=[31589] 00:19:03.980 bw ( KiB/s): min=20480, max=22472, per=27.27%, avg=21476.00, stdev=1408.56, samples=2 00:19:03.980 iops : min= 5120, max= 5618, avg=5369.00, stdev=352.14, samples=2 00:19:03.980 lat (msec) : 2=0.41%, 4=3.84%, 10=48.34%, 20=33.13%, 50=14.27% 00:19:03.980 cpu : usr=3.38%, sys=6.06%, ctx=776, majf=0, minf=1 00:19:03.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:03.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.980 issued rwts: total=5120,5496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.980 job2: (groupid=0, jobs=1): err= 0: pid=2381965: Sat Jun 8 21:14:41 2024 00:19:03.980 read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1018msec) 00:19:03.980 slat (nsec): min=977, max=20437k, avg=139044.94, stdev=1006662.85 00:19:03.980 clat (usec): min=6159, max=58483, avg=17567.14, stdev=7528.38 00:19:03.980 lat (usec): min=6161, max=58491, avg=17706.18, stdev=7598.62 00:19:03.980 clat percentiles (usec): 00:19:03.980 | 1.00th=[ 7177], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[10552], 00:19:03.980 | 30.00th=[12125], 40.00th=[14746], 50.00th=[17171], 60.00th=[18482], 00:19:03.980 | 70.00th=[20841], 80.00th=[23725], 90.00th=[27395], 95.00th=[29754], 00:19:03.980 | 99.00th=[41681], 99.50th=[45876], 99.90th=[58459], 99.95th=[58459], 00:19:03.980 | 99.99th=[58459] 00:19:03.980 write: IOPS=3826, BW=14.9MiB/s (15.7MB/s)(15.2MiB/1018msec); 0 zone resets 00:19:03.980 slat (nsec): min=1653, max=13264k, avg=123887.61, stdev=696340.65 00:19:03.980 clat (usec): min=2501, max=69517, avg=16916.11, stdev=12496.86 00:19:03.980 lat (usec): min=2510, max=73482, avg=17040.00, stdev=12561.65 00:19:03.980 clat percentiles (usec): 00:19:03.980 | 1.00th=[ 4359], 5.00th=[ 6456], 10.00th=[ 7767], 20.00th=[ 8586], 00:19:03.980 | 30.00th=[10159], 40.00th=[11600], 50.00th=[13435], 60.00th=[15008], 00:19:03.980 | 70.00th=[17171], 80.00th=[19792], 90.00th=[28967], 95.00th=[51119], 00:19:03.980 | 99.00th=[65274], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:19:03.980 | 99.99th=[69731] 00:19:03.980 bw ( KiB/s): min=13264, max=16872, per=19.13%, avg=15068.00, stdev=2551.24, samples=2 00:19:03.980 iops : min= 3316, max= 4218, 
avg=3767.00, stdev=637.81, samples=2 00:19:03.980 lat (msec) : 4=0.41%, 10=22.74%, 20=50.47%, 50=23.21%, 100=3.16% 00:19:03.980 cpu : usr=3.34%, sys=3.54%, ctx=372, majf=0, minf=1 00:19:03.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:03.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.980 issued rwts: total=3584,3895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.980 job3: (groupid=0, jobs=1): err= 0: pid=2381971: Sat Jun 8 21:14:41 2024 00:19:03.980 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec) 00:19:03.980 slat (nsec): min=954, max=17516k, avg=130284.73, stdev=930556.32 00:19:03.980 clat (usec): min=4728, max=68766, avg=16415.73, stdev=10894.25 00:19:03.980 lat (usec): min=4730, max=68768, avg=16546.01, stdev=10981.26 00:19:03.980 clat percentiles (usec): 00:19:03.980 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 8586], 00:19:03.980 | 30.00th=[10028], 40.00th=[11076], 50.00th=[12780], 60.00th=[15008], 00:19:03.980 | 70.00th=[17433], 80.00th=[22414], 90.00th=[28443], 95.00th=[35914], 00:19:03.980 | 99.00th=[65274], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:19:03.980 | 99.99th=[68682] 00:19:03.980 write: IOPS=3424, BW=13.4MiB/s (14.0MB/s)(13.6MiB/1018msec); 0 zone resets 00:19:03.980 slat (nsec): min=1693, max=15056k, avg=162782.94, stdev=889872.06 00:19:03.980 clat (usec): min=2607, max=98373, avg=22251.10, stdev=22405.99 00:19:03.980 lat (usec): min=2618, max=98382, avg=22413.88, stdev=22550.97 00:19:03.980 clat percentiles (usec): 00:19:03.980 | 1.00th=[ 3359], 5.00th=[ 4686], 10.00th=[ 6390], 20.00th=[ 8029], 00:19:03.980 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11863], 60.00th=[13304], 00:19:03.980 | 70.00th=[15926], 80.00th=[41681], 90.00th=[62653], 95.00th=[70779], 00:19:03.980 | 99.00th=[91751], 99.50th=[92799], 99.90th=[98042], 99.95th=[98042], 00:19:03.980 | 99.99th=[98042] 00:19:03.980 bw ( KiB/s): min= 6384, max=20480, per=17.05%, avg=13432.00, stdev=9967.38, samples=2 00:19:03.980 iops : min= 1596, max= 5120, avg=3358.00, stdev=2491.84, samples=2 00:19:03.980 lat (msec) : 4=0.79%, 10=32.19%, 20=41.77%, 50=15.66%, 100=9.59% 00:19:03.980 cpu : usr=2.06%, sys=4.13%, ctx=344, majf=0, minf=1 00:19:03.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:03.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.980 issued rwts: total=3072,3486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.980 00:19:03.980 Run status group 0 (all jobs): 00:19:03.980 READ: bw=72.3MiB/s (75.8MB/s), 11.8MiB/s-27.4MiB/s (12.4MB/s-28.7MB/s), io=73.6MiB (77.2MB), run=1008-1018msec 00:19:03.980 WRITE: bw=76.9MiB/s (80.7MB/s), 13.4MiB/s-27.8MiB/s (14.0MB/s-29.1MB/s), io=78.3MiB (82.1MB), run=1008-1018msec 00:19:03.980 00:19:03.980 Disk stats (read/write): 00:19:03.980 nvme0n1: ios=6024/6144, merge=0/0, ticks=33333/31591, in_queue=64924, util=84.17% 00:19:03.980 nvme0n2: ios=3957/4096, merge=0/0, ticks=47472/53273, in_queue=100745, util=90.83% 00:19:03.980 nvme0n3: ios=3132/3296, merge=0/0, ticks=53462/50448, in_queue=103910, util=92.62% 00:19:03.980 nvme0n4: ios=3085/3072, merge=0/0, ticks=48300/53501, in_queue=101801, 
util=96.91% 00:19:03.980 21:14:41 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:03.980 [global] 00:19:03.980 thread=1 00:19:03.980 invalidate=1 00:19:03.980 rw=randwrite 00:19:03.980 time_based=1 00:19:03.980 runtime=1 00:19:03.980 ioengine=libaio 00:19:03.980 direct=1 00:19:03.980 bs=4096 00:19:03.980 iodepth=128 00:19:03.980 norandommap=0 00:19:03.980 numjobs=1 00:19:03.980 00:19:03.980 verify_dump=1 00:19:03.980 verify_backlog=512 00:19:03.980 verify_state_save=0 00:19:03.980 do_verify=1 00:19:03.980 verify=crc32c-intel 00:19:03.980 [job0] 00:19:03.980 filename=/dev/nvme0n1 00:19:03.980 [job1] 00:19:03.980 filename=/dev/nvme0n2 00:19:03.980 [job2] 00:19:03.980 filename=/dev/nvme0n3 00:19:03.980 [job3] 00:19:03.980 filename=/dev/nvme0n4 00:19:03.980 Could not set queue depth (nvme0n1) 00:19:03.980 Could not set queue depth (nvme0n2) 00:19:03.980 Could not set queue depth (nvme0n3) 00:19:03.980 Could not set queue depth (nvme0n4) 00:19:04.240 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.240 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.240 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.240 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:04.240 fio-3.35 00:19:04.240 Starting 4 threads 00:19:05.624 00:19:05.624 job0: (groupid=0, jobs=1): err= 0: pid=2382407: Sat Jun 8 21:14:43 2024 00:19:05.624 read: IOPS=7257, BW=28.3MiB/s (29.7MB/s)(28.5MiB/1005msec) 00:19:05.624 slat (nsec): min=856, max=13040k, avg=64856.13, stdev=506112.10 00:19:05.624 clat (usec): min=1243, max=26048, avg=8619.38, stdev=3733.01 00:19:05.624 lat (usec): min=2383, max=26056, avg=8684.24, stdev=3758.24 00:19:05.624 clat percentiles (usec): 00:19:05.624 | 1.00th=[ 3261], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6194], 00:19:05.624 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 7963], 00:19:05.624 | 70.00th=[ 8848], 80.00th=[10290], 90.00th=[13698], 95.00th=[16188], 00:19:05.624 | 99.00th=[23200], 99.50th=[24773], 99.90th=[25560], 99.95th=[26084], 00:19:05.624 | 99.99th=[26084] 00:19:05.624 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:19:05.624 slat (nsec): min=1468, max=7350.7k, avg=60670.69, stdev=391680.77 00:19:05.624 clat (usec): min=1438, max=31215, avg=8417.35, stdev=4266.60 00:19:05.624 lat (usec): min=1446, max=31223, avg=8478.02, stdev=4290.03 00:19:05.624 clat percentiles (usec): 00:19:05.624 | 1.00th=[ 2245], 5.00th=[ 3523], 10.00th=[ 4621], 20.00th=[ 5669], 00:19:05.624 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 7242], 60.00th=[ 7898], 00:19:05.624 | 70.00th=[ 9110], 80.00th=[11207], 90.00th=[13304], 95.00th=[16057], 00:19:05.624 | 99.00th=[25560], 99.50th=[28181], 99.90th=[30540], 99.95th=[30802], 00:19:05.624 | 99.99th=[31327] 00:19:05.624 bw ( KiB/s): min=24526, max=36848, per=31.15%, avg=30687.00, stdev=8712.97, samples=2 00:19:05.624 iops : min= 6131, max= 9212, avg=7671.50, stdev=2178.60, samples=2 00:19:05.624 lat (msec) : 2=0.27%, 4=3.98%, 10=72.76%, 20=20.28%, 50=2.70% 00:19:05.624 cpu : usr=4.08%, sys=6.37%, ctx=630, majf=0, minf=1 00:19:05.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:05.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.624 issued rwts: total=7294,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.624 job1: (groupid=0, jobs=1): err= 0: pid=2382414: Sat Jun 8 21:14:43 2024 00:19:05.624 read: IOPS=7203, BW=28.1MiB/s (29.5MB/s)(28.2MiB/1004msec) 00:19:05.624 slat (nsec): min=913, max=10612k, avg=68367.76, stdev=469332.83 00:19:05.624 clat (usec): min=2875, max=28639, avg=8872.01, stdev=3046.04 00:19:05.624 lat (usec): min=3556, max=28664, avg=8940.38, stdev=3090.47 00:19:05.624 clat percentiles (usec): 00:19:05.624 | 1.00th=[ 5342], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7046], 00:19:05.624 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8225], 00:19:05.624 | 70.00th=[ 8979], 80.00th=[10421], 90.00th=[12387], 95.00th=[15008], 00:19:05.624 | 99.00th=[20841], 99.50th=[23725], 99.90th=[24773], 99.95th=[25297], 00:19:05.624 | 99.99th=[28705] 00:19:05.624 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:19:05.624 slat (nsec): min=1538, max=7175.6k, avg=61661.32, stdev=342428.69 00:19:05.624 clat (usec): min=3763, max=21809, avg=8168.82, stdev=2274.62 00:19:05.624 lat (usec): min=3791, max=22315, avg=8230.48, stdev=2296.41 00:19:05.624 clat percentiles (usec): 00:19:05.624 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6718], 00:19:05.624 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7832], 00:19:05.624 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[11076], 95.00th=[12387], 00:19:05.624 | 99.00th=[16188], 99.50th=[19268], 99.90th=[21890], 99.95th=[21890], 00:19:05.624 | 99.99th=[21890] 00:19:05.624 bw ( KiB/s): min=28168, max=32702, per=30.89%, avg=30435.00, stdev=3206.02, samples=2 00:19:05.624 iops : min= 7042, max= 8175, avg=7608.50, stdev=801.15, samples=2 00:19:05.624 lat (msec) : 4=0.33%, 10=79.48%, 20=19.12%, 50=1.07% 00:19:05.624 cpu : usr=4.09%, sys=7.18%, ctx=695, majf=0, minf=1 00:19:05.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:05.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.624 issued rwts: total=7232,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.624 job2: (groupid=0, jobs=1): err= 0: pid=2382433: Sat Jun 8 21:14:43 2024 00:19:05.624 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:19:05.624 slat (nsec): min=921, max=48872k, avg=142121.22, stdev=1478023.01 00:19:05.624 clat (usec): min=5036, max=66458, avg=20191.52, stdev=15219.42 00:19:05.624 lat (usec): min=5047, max=66481, avg=20333.64, stdev=15312.72 00:19:05.624 clat percentiles (usec): 00:19:05.624 | 1.00th=[ 5473], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[ 9634], 00:19:05.624 | 30.00th=[10683], 40.00th=[12387], 50.00th=[14353], 60.00th=[15926], 00:19:05.624 | 70.00th=[18744], 80.00th=[26346], 90.00th=[50594], 95.00th=[57410], 00:19:05.624 | 99.00th=[61080], 99.50th=[61080], 99.90th=[64226], 99.95th=[64750], 00:19:05.624 | 99.99th=[66323] 00:19:05.624 write: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1011msec); 0 zone resets 00:19:05.624 slat (nsec): min=1501, max=27414k, avg=100348.88, stdev=774435.84 00:19:05.624 clat (usec): min=1217, max=70061, avg=14523.85, stdev=8234.24 00:19:05.624 lat (usec): min=1228, 
max=70063, avg=14624.20, stdev=8267.44 00:19:05.624 clat percentiles (usec): 00:19:05.624 | 1.00th=[ 2180], 5.00th=[ 3884], 10.00th=[ 6849], 20.00th=[ 8455], 00:19:05.624 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12911], 60.00th=[14353], 00:19:05.624 | 70.00th=[16319], 80.00th=[18482], 90.00th=[24511], 95.00th=[32900], 00:19:05.624 | 99.00th=[40633], 99.50th=[50070], 99.90th=[61080], 99.95th=[61080], 00:19:05.624 | 99.99th=[69731] 00:19:05.624 bw ( KiB/s): min=13400, max=16384, per=15.11%, avg=14892.00, stdev=2110.01, samples=2 00:19:05.624 iops : min= 3350, max= 4096, avg=3723.00, stdev=527.50, samples=2 00:19:05.624 lat (msec) : 2=0.05%, 4=2.72%, 10=23.46%, 20=52.03%, 50=16.60% 00:19:05.624 lat (msec) : 100=5.14% 00:19:05.624 cpu : usr=2.97%, sys=3.86%, ctx=318, majf=0, minf=1 00:19:05.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:05.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.625 issued rwts: total=3584,3850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.625 job3: (groupid=0, jobs=1): err= 0: pid=2382440: Sat Jun 8 21:14:43 2024 00:19:05.625 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:19:05.625 slat (nsec): min=926, max=7410.1k, avg=90419.87, stdev=567670.26 00:19:05.625 clat (usec): min=5438, max=26075, avg=11629.78, stdev=2327.49 00:19:05.625 lat (usec): min=5454, max=26101, avg=11720.20, stdev=2389.57 00:19:05.625 clat percentiles (usec): 00:19:05.625 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[10290], 00:19:05.625 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11338], 60.00th=[11731], 00:19:05.625 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14353], 95.00th=[15664], 00:19:05.625 | 99.00th=[21103], 99.50th=[21627], 99.90th=[23462], 99.95th=[23462], 00:19:05.625 | 99.99th=[26084] 00:19:05.625 write: IOPS=5659, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1006msec); 0 zone resets 00:19:05.625 slat (nsec): min=1578, max=6041.1k, avg=78275.69, stdev=421344.37 00:19:05.625 clat (usec): min=1220, max=29144, avg=10909.62, stdev=3362.77 00:19:05.625 lat (usec): min=1228, max=29146, avg=10987.89, stdev=3386.89 00:19:05.625 clat percentiles (usec): 00:19:05.625 | 1.00th=[ 2737], 5.00th=[ 6259], 10.00th=[ 7570], 20.00th=[ 8717], 00:19:05.625 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[10814], 60.00th=[11338], 00:19:05.625 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13829], 95.00th=[15270], 00:19:05.625 | 99.00th=[25035], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:19:05.625 | 99.99th=[29230] 00:19:05.625 bw ( KiB/s): min=20480, max=24576, per=22.86%, avg=22528.00, stdev=2896.31, samples=2 00:19:05.625 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:19:05.625 lat (msec) : 2=0.32%, 4=0.43%, 10=24.29%, 20=73.08%, 50=1.88% 00:19:05.625 cpu : usr=3.48%, sys=5.97%, ctx=659, majf=0, minf=1 00:19:05.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:05.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.625 issued rwts: total=5632,5693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.625 00:19:05.625 Run status group 0 (all jobs): 00:19:05.625 READ: bw=91.7MiB/s (96.2MB/s), 13.8MiB/s-28.3MiB/s 
(14.5MB/s-29.7MB/s), io=92.7MiB (97.2MB), run=1004-1011msec 00:19:05.625 WRITE: bw=96.2MiB/s (101MB/s), 14.9MiB/s-29.9MiB/s (15.6MB/s-31.3MB/s), io=97.3MiB (102MB), run=1004-1011msec 00:19:05.625 00:19:05.625 Disk stats (read/write): 00:19:05.625 nvme0n1: ios=6191/6144, merge=0/0, ticks=51211/49749, in_queue=100960, util=87.98% 00:19:05.625 nvme0n2: ios=6167/6207, merge=0/0, ticks=27349/23453, in_queue=50802, util=92.56% 00:19:05.625 nvme0n3: ios=2716/3072, merge=0/0, ticks=36596/31669, in_queue=68265, util=95.26% 00:19:05.625 nvme0n4: ios=4720/5120, merge=0/0, ticks=26895/27604, in_queue=54499, util=96.70% 00:19:05.625 21:14:43 -- target/fio.sh@55 -- # sync 00:19:05.625 21:14:43 -- target/fio.sh@59 -- # fio_pid=2382580 00:19:05.625 21:14:43 -- target/fio.sh@61 -- # sleep 3 00:19:05.625 21:14:43 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:05.625 [global] 00:19:05.625 thread=1 00:19:05.625 invalidate=1 00:19:05.625 rw=read 00:19:05.625 time_based=1 00:19:05.625 runtime=10 00:19:05.625 ioengine=libaio 00:19:05.625 direct=1 00:19:05.625 bs=4096 00:19:05.625 iodepth=1 00:19:05.625 norandommap=1 00:19:05.625 numjobs=1 00:19:05.625 00:19:05.625 [job0] 00:19:05.625 filename=/dev/nvme0n1 00:19:05.625 [job1] 00:19:05.625 filename=/dev/nvme0n2 00:19:05.625 [job2] 00:19:05.625 filename=/dev/nvme0n3 00:19:05.625 [job3] 00:19:05.625 filename=/dev/nvme0n4 00:19:05.625 Could not set queue depth (nvme0n1) 00:19:05.625 Could not set queue depth (nvme0n2) 00:19:05.625 Could not set queue depth (nvme0n3) 00:19:05.625 Could not set queue depth (nvme0n4) 00:19:05.885 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.885 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.885 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.885 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.885 fio-3.35 00:19:05.885 Starting 4 threads 00:19:08.427 21:14:46 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:08.427 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6615040, buflen=4096 00:19:08.427 fio: pid=2382948, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:08.427 21:14:46 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:08.687 21:14:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.687 21:14:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:08.687 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=585728, buflen=4096 00:19:08.687 fio: pid=2382943, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:08.947 21:14:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.947 21:14:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:08.947 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2076672, buflen=4096 00:19:08.947 fio: pid=2382902, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O 
error 00:19:08.947 21:14:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.947 21:14:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:08.947 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=303104, buflen=4096 00:19:08.947 fio: pid=2382918, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:09.209 00:19:09.209 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2382902: Sat Jun 8 21:14:47 2024 00:19:09.209 read: IOPS=176, BW=703KiB/s (720kB/s)(2028KiB/2885msec) 00:19:09.209 slat (usec): min=6, max=15431, avg=39.92, stdev=684.27 00:19:09.209 clat (usec): min=947, max=42303, avg=5605.53, stdev=12692.93 00:19:09.209 lat (usec): min=954, max=57000, avg=5645.48, stdev=12802.31 00:19:09.209 clat percentiles (usec): 00:19:09.209 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1156], 00:19:09.209 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1188], 60.00th=[ 1205], 00:19:09.209 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[41681], 95.00th=[42206], 00:19:09.209 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:09.209 | 99.99th=[42206] 00:19:09.209 bw ( KiB/s): min= 96, max= 3368, per=25.94%, avg=796.80, stdev=1439.47, samples=5 00:19:09.209 iops : min= 24, max= 842, avg=199.20, stdev=359.87, samples=5 00:19:09.209 lat (usec) : 1000=0.59% 00:19:09.209 lat (msec) : 2=88.39%, 50=10.83% 00:19:09.209 cpu : usr=0.07%, sys=0.24%, ctx=509, majf=0, minf=1 00:19:09.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.209 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2382918: Sat Jun 8 21:14:47 2024 00:19:09.209 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(296KiB/3049msec) 00:19:09.209 slat (usec): min=25, max=9466, avg=152.22, stdev=1090.09 00:19:09.209 clat (usec): min=1331, max=42154, avg=40856.79, stdev=6623.21 00:19:09.209 lat (usec): min=1357, max=50986, avg=41010.64, stdev=6725.42 00:19:09.209 clat percentiles (usec): 00:19:09.209 | 1.00th=[ 1336], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:09.209 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:09.209 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:09.209 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:09.209 | 99.99th=[42206] 00:19:09.209 bw ( KiB/s): min= 96, max= 104, per=3.16%, avg=97.33, stdev= 3.27, samples=6 00:19:09.209 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:19:09.209 lat (msec) : 2=2.67%, 50=96.00% 00:19:09.209 cpu : usr=0.00%, sys=0.13%, ctx=76, majf=0, minf=1 00:19:09.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.209 job2: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2382943: Sat Jun 8 21:14:47 2024 00:19:09.209 read: IOPS=52, BW=210KiB/s (215kB/s)(572KiB/2730msec) 00:19:09.209 slat (nsec): min=3671, max=33298, avg=17012.00, stdev=8967.46 00:19:09.209 clat (usec): min=513, max=41487, avg=18923.19, stdev=19924.37 00:19:09.209 lat (usec): min=546, max=41495, avg=18940.14, stdev=19932.18 00:19:09.209 clat percentiles (usec): 00:19:09.209 | 1.00th=[ 701], 5.00th=[ 955], 10.00th=[ 1004], 20.00th=[ 1037], 00:19:09.209 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1139], 60.00th=[41157], 00:19:09.209 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:09.209 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:09.209 | 99.99th=[41681] 00:19:09.209 bw ( KiB/s): min= 96, max= 704, per=7.14%, avg=219.20, stdev=271.03, samples=5 00:19:09.209 iops : min= 24, max= 176, avg=54.80, stdev=67.76, samples=5 00:19:09.209 lat (usec) : 750=1.39%, 1000=6.25% 00:19:09.209 lat (msec) : 2=47.22%, 50=44.44% 00:19:09.209 cpu : usr=0.00%, sys=0.18%, ctx=144, majf=0, minf=1 00:19:09.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 issued rwts: total=144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.209 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2382948: Sat Jun 8 21:14:47 2024 00:19:09.209 read: IOPS=635, BW=2539KiB/s (2600kB/s)(6460KiB/2544msec) 00:19:09.209 slat (nsec): min=6978, max=59755, avg=26196.18, stdev=4663.26 00:19:09.209 clat (usec): min=587, max=42173, avg=1525.71, stdev=4296.85 00:19:09.209 lat (usec): min=613, max=42199, avg=1551.90, stdev=4296.91 00:19:09.209 clat percentiles (usec): 00:19:09.209 | 1.00th=[ 816], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1020], 00:19:09.209 | 30.00th=[ 1037], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:19:09.209 | 70.00th=[ 1090], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:19:09.209 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:09.209 | 99.99th=[42206] 00:19:09.209 bw ( KiB/s): min= 88, max= 3752, per=83.56%, avg=2564.80, stdev=1581.62, samples=5 00:19:09.209 iops : min= 22, max= 938, avg=641.20, stdev=395.41, samples=5 00:19:09.209 lat (usec) : 750=0.31%, 1000=12.25% 00:19:09.209 lat (msec) : 2=86.20%, 4=0.06%, 50=1.11% 00:19:09.209 cpu : usr=0.43%, sys=2.20%, ctx=1617, majf=0, minf=2 00:19:09.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.209 issued rwts: total=1616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:09.209 00:19:09.209 Run status group 0 (all jobs): 00:19:09.209 READ: bw=3069KiB/s (3142kB/s), 97.1KiB/s-2539KiB/s (99.4kB/s-2600kB/s), io=9356KiB (9581kB), run=2544-3049msec 00:19:09.209 00:19:09.209 Disk stats (read/write): 00:19:09.209 nvme0n1: ios=505/0, merge=0/0, ticks=2754/0, in_queue=2754, util=92.75% 00:19:09.209 nvme0n2: ios=74/0, merge=0/0, ticks=3024/0, in_queue=3024, util=94.34% 00:19:09.209 nvme0n3: ios=138/0, 
merge=0/0, ticks=2500/0, in_queue=2500, util=95.60% 00:19:09.209 nvme0n4: ios=1659/0, merge=0/0, ticks=3557/0, in_queue=3557, util=98.83% 00:19:09.209 21:14:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.209 21:14:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:09.470 21:14:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.471 21:14:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:09.471 21:14:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.471 21:14:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:09.732 21:14:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.732 21:14:47 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:09.732 21:14:47 -- target/fio.sh@69 -- # fio_status=0 00:19:09.732 21:14:47 -- target/fio.sh@70 -- # wait 2382580 00:19:09.732 21:14:47 -- target/fio.sh@70 -- # fio_status=4 00:19:09.732 21:14:47 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:09.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.993 21:14:47 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:09.993 21:14:47 -- common/autotest_common.sh@1198 -- # local i=0 00:19:09.993 21:14:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:09.993 21:14:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:09.993 21:14:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:09.993 21:14:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:09.993 21:14:47 -- common/autotest_common.sh@1210 -- # return 0 00:19:09.993 21:14:47 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:09.993 21:14:47 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:09.993 nvmf hotplug test: fio failed as expected 00:19:09.993 21:14:47 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.993 21:14:48 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:09.993 21:14:48 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:09.993 21:14:48 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:09.993 21:14:48 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:09.993 21:14:48 -- target/fio.sh@91 -- # nvmftestfini 00:19:09.993 21:14:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:09.993 21:14:48 -- nvmf/common.sh@116 -- # sync 00:19:09.993 21:14:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:09.993 21:14:48 -- nvmf/common.sh@119 -- # set +e 00:19:09.993 21:14:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:09.993 21:14:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:09.993 rmmod nvme_tcp 00:19:10.255 rmmod nvme_fabrics 00:19:10.255 rmmod nvme_keyring 00:19:10.255 21:14:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:10.255 21:14:48 -- nvmf/common.sh@123 -- # set -e 00:19:10.255 21:14:48 -- nvmf/common.sh@124 -- # return 0 00:19:10.255 21:14:48 -- 
nvmf/common.sh@477 -- # '[' -n 2379127 ']' 00:19:10.255 21:14:48 -- nvmf/common.sh@478 -- # killprocess 2379127 00:19:10.255 21:14:48 -- common/autotest_common.sh@926 -- # '[' -z 2379127 ']' 00:19:10.255 21:14:48 -- common/autotest_common.sh@930 -- # kill -0 2379127 00:19:10.255 21:14:48 -- common/autotest_common.sh@931 -- # uname 00:19:10.255 21:14:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:10.255 21:14:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2379127 00:19:10.255 21:14:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:10.255 21:14:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:10.255 21:14:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2379127' 00:19:10.255 killing process with pid 2379127 00:19:10.255 21:14:48 -- common/autotest_common.sh@945 -- # kill 2379127 00:19:10.255 21:14:48 -- common/autotest_common.sh@950 -- # wait 2379127 00:19:10.515 21:14:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:10.515 21:14:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:10.515 21:14:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:10.515 21:14:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.515 21:14:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:10.515 21:14:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.515 21:14:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.515 21:14:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.430 21:14:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:12.430 00:19:12.430 real 0m28.106s 00:19:12.430 user 2m32.254s 00:19:12.430 sys 0m9.094s 00:19:12.430 21:14:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.430 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:12.430 ************************************ 00:19:12.430 END TEST nvmf_fio_target 00:19:12.430 ************************************ 00:19:12.430 21:14:50 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:12.430 21:14:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:12.430 21:14:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:12.430 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:12.430 ************************************ 00:19:12.430 START TEST nvmf_bdevio 00:19:12.430 ************************************ 00:19:12.430 21:14:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:12.692 * Looking for test storage... 
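For reference, the hotplug phase of target/fio.sh that finished just above reduces to roughly the sequence below. This is a condensed sketch using only the paths, bdev names and flags visible in this log, not the script's exact logic (which also manages the local-job*-verify.state files and checks the exit status more carefully):
# kick off a 10-second read workload against the connected nvme0n* devices in the background
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# hot-remove the backing bdevs while fio is still running
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $rpc bdev_malloc_delete "$m"
done
# every fio job is now expected to die with Remote I/O errors, so a non-zero status is the pass condition
wait "$fio_pid" && fio_status=0 || fio_status=4
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1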
00:19:12.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.692 21:14:50 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.692 21:14:50 -- nvmf/common.sh@7 -- # uname -s 00:19:12.692 21:14:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.692 21:14:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.692 21:14:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.692 21:14:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.692 21:14:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.692 21:14:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.692 21:14:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.692 21:14:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.692 21:14:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.692 21:14:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.692 21:14:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.692 21:14:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.692 21:14:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.692 21:14:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.692 21:14:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.692 21:14:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.692 21:14:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.692 21:14:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.692 21:14:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.692 21:14:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.692 21:14:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.692 21:14:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.692 21:14:50 -- paths/export.sh@5 -- # export PATH 00:19:12.692 21:14:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.692 21:14:50 -- nvmf/common.sh@46 -- # : 0 00:19:12.692 21:14:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:12.692 21:14:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:12.692 21:14:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:12.692 21:14:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.692 21:14:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.692 21:14:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:12.692 21:14:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:12.692 21:14:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:12.692 21:14:50 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:12.692 21:14:50 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:12.692 21:14:50 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:12.692 21:14:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:12.692 21:14:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.692 21:14:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:12.692 21:14:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:12.692 21:14:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:12.692 21:14:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.692 21:14:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.692 21:14:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.692 21:14:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:12.692 21:14:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:12.692 21:14:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:12.692 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:19.278 21:14:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:19.278 21:14:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:19.278 21:14:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:19.278 21:14:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:19.278 21:14:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:19.278 21:14:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:19.278 21:14:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:19.278 21:14:57 -- nvmf/common.sh@294 -- # net_devs=() 00:19:19.278 21:14:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:19.278 21:14:57 -- nvmf/common.sh@295 
-- # e810=() 00:19:19.278 21:14:57 -- nvmf/common.sh@295 -- # local -ga e810 00:19:19.278 21:14:57 -- nvmf/common.sh@296 -- # x722=() 00:19:19.278 21:14:57 -- nvmf/common.sh@296 -- # local -ga x722 00:19:19.278 21:14:57 -- nvmf/common.sh@297 -- # mlx=() 00:19:19.278 21:14:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:19.278 21:14:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.278 21:14:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:19.278 21:14:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:19.278 21:14:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:19.278 21:14:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:19.278 21:14:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:19.278 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:19.278 21:14:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:19.278 21:14:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:19.278 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:19.278 21:14:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:19.278 21:14:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:19.278 21:14:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:19.278 21:14:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.278 21:14:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:19.278 21:14:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.278 21:14:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:19.278 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:19:19.278 21:14:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.278 21:14:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:19.278 21:14:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.278 21:14:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:19.278 21:14:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.278 21:14:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:19.278 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:19.278 21:14:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.279 21:14:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:19.279 21:14:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:19.279 21:14:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:19.279 21:14:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:19.279 21:14:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:19.279 21:14:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.279 21:14:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.279 21:14:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.279 21:14:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:19.279 21:14:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.279 21:14:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.279 21:14:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:19.279 21:14:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.279 21:14:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.279 21:14:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:19.279 21:14:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:19.279 21:14:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.279 21:14:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.539 21:14:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.539 21:14:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.539 21:14:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:19.539 21:14:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.539 21:14:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.539 21:14:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.539 21:14:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:19.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:19:19.539 00:19:19.539 --- 10.0.0.2 ping statistics --- 00:19:19.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.539 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:19:19.539 21:14:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:19.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:19:19.539 00:19:19.539 --- 10.0.0.1 ping statistics --- 00:19:19.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.539 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:19:19.539 21:14:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.539 21:14:57 -- nvmf/common.sh@410 -- # return 0 00:19:19.539 21:14:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:19.539 21:14:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.539 21:14:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:19.539 21:14:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:19.539 21:14:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.539 21:14:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:19.539 21:14:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:19.539 21:14:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:19.539 21:14:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:19.539 21:14:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:19.539 21:14:57 -- common/autotest_common.sh@10 -- # set +x 00:19:19.801 21:14:57 -- nvmf/common.sh@469 -- # nvmfpid=2387842 00:19:19.801 21:14:57 -- nvmf/common.sh@470 -- # waitforlisten 2387842 00:19:19.801 21:14:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:19.801 21:14:57 -- common/autotest_common.sh@819 -- # '[' -z 2387842 ']' 00:19:19.801 21:14:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.801 21:14:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:19.801 21:14:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.801 21:14:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:19.801 21:14:57 -- common/autotest_common.sh@10 -- # set +x 00:19:19.801 [2024-06-08 21:14:57.684225] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:19.801 [2024-06-08 21:14:57.684287] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.801 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.801 [2024-06-08 21:14:57.770289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.801 [2024-06-08 21:14:57.863350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:19.801 [2024-06-08 21:14:57.863519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.801 [2024-06-08 21:14:57.863531] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.801 [2024-06-08 21:14:57.863538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
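The bring-up captured above (nvmf_tcp_init in nvmf/common.sh followed by nvmfappstart) amounts to roughly the commands below. A minimal sketch assuming the same back-to-back E810 ports, named cvl_0_0/cvl_0_1 on this host, and root privileges; the interface names, workspace path and -m core mask are specific to this run. Moving one port into a private namespace forces the NVMe/TCP traffic out over the physical links even though initiator and target share the machine.
# give the target its own network namespace and one of the two ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# launch nvmf_tgt inside the namespace with the same flags as above
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
# (the harness then waits for the target to listen on /var/tmp/spdk.sock before issuing RPCs)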
00:19:19.801 [2024-06-08 21:14:57.863716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:19.801 [2024-06-08 21:14:57.863776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:19.801 [2024-06-08 21:14:57.863910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:19.801 [2024-06-08 21:14:57.863910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.745 21:14:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:20.745 21:14:58 -- common/autotest_common.sh@852 -- # return 0 00:19:20.745 21:14:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:20.745 21:14:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:20.745 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 21:14:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.745 21:14:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:20.745 21:14:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.745 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 [2024-06-08 21:14:58.529920] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.745 21:14:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.745 21:14:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:20.745 21:14:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.745 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 Malloc0 00:19:20.745 21:14:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.745 21:14:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.745 21:14:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.745 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 21:14:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.745 21:14:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.745 21:14:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.745 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 21:14:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.745 21:14:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.745 21:14:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.745 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 [2024-06-08 21:14:58.595412] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.745 21:14:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.745 21:14:58 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:20.745 21:14:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:20.745 21:14:58 -- nvmf/common.sh@520 -- # config=() 00:19:20.745 21:14:58 -- nvmf/common.sh@520 -- # local subsystem config 00:19:20.745 21:14:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:20.745 21:14:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:20.745 { 00:19:20.745 "params": { 00:19:20.745 "name": "Nvme$subsystem", 00:19:20.745 "trtype": "$TEST_TRANSPORT", 00:19:20.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.745 "adrfam": "ipv4", 00:19:20.745 "trsvcid": 
"$NVMF_PORT", 00:19:20.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.745 "hdgst": ${hdgst:-false}, 00:19:20.745 "ddgst": ${ddgst:-false} 00:19:20.745 }, 00:19:20.745 "method": "bdev_nvme_attach_controller" 00:19:20.745 } 00:19:20.745 EOF 00:19:20.745 )") 00:19:20.745 21:14:58 -- nvmf/common.sh@542 -- # cat 00:19:20.745 21:14:58 -- nvmf/common.sh@544 -- # jq . 00:19:20.745 21:14:58 -- nvmf/common.sh@545 -- # IFS=, 00:19:20.745 21:14:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:20.745 "params": { 00:19:20.745 "name": "Nvme1", 00:19:20.745 "trtype": "tcp", 00:19:20.745 "traddr": "10.0.0.2", 00:19:20.745 "adrfam": "ipv4", 00:19:20.745 "trsvcid": "4420", 00:19:20.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.745 "hdgst": false, 00:19:20.745 "ddgst": false 00:19:20.745 }, 00:19:20.745 "method": "bdev_nvme_attach_controller" 00:19:20.745 }' 00:19:20.745 [2024-06-08 21:14:58.648683] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:20.745 [2024-06-08 21:14:58.648753] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388171 ] 00:19:20.745 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.745 [2024-06-08 21:14:58.714604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.745 [2024-06-08 21:14:58.787460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.745 [2024-06-08 21:14:58.787720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.745 [2024-06-08 21:14:58.787724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.006 [2024-06-08 21:14:58.927529] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:21.006 [2024-06-08 21:14:58.927561] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:21.006 I/O targets: 00:19:21.006 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:21.006 00:19:21.006 00:19:21.006 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.006 http://cunit.sourceforge.net/ 00:19:21.006 00:19:21.006 00:19:21.006 Suite: bdevio tests on: Nvme1n1 00:19:21.006 Test: blockdev write read block ...passed 00:19:21.006 Test: blockdev write zeroes read block ...passed 00:19:21.006 Test: blockdev write zeroes read no split ...passed 00:19:21.006 Test: blockdev write zeroes read split ...passed 00:19:21.266 Test: blockdev write zeroes read split partial ...passed 00:19:21.266 Test: blockdev reset ...[2024-06-08 21:14:59.158900] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.266 [2024-06-08 21:14:59.158965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d054e0 (9): Bad file descriptor 00:19:21.267 [2024-06-08 21:14:59.215367] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:21.267 passed 00:19:21.267 Test: blockdev write read 8 blocks ...passed 00:19:21.267 Test: blockdev write read size > 128k ...passed 00:19:21.267 Test: blockdev write read invalid size ...passed 00:19:21.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.267 Test: blockdev write read max offset ...passed 00:19:21.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.527 Test: blockdev writev readv 8 blocks ...passed 00:19:21.527 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.527 Test: blockdev writev readv block ...passed 00:19:21.527 Test: blockdev writev readv size > 128k ...passed 00:19:21.527 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.527 Test: blockdev comparev and writev ...[2024-06-08 21:14:59.437451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.437475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.437486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.437492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.437886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.437894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.437904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.437909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.438311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.438319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.438328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.438730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.438741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.438750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.527 [2024-06-08 21:14:59.438756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.527 passed 00:19:21.527 Test: blockdev nvme passthru rw ...passed 00:19:21.527 Test: blockdev nvme passthru vendor specific ...[2024-06-08 21:14:59.522887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.527 [2024-06-08 21:14:59.522898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.527 [2024-06-08 21:14:59.523115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.528 [2024-06-08 21:14:59.523122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.528 [2024-06-08 21:14:59.523379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.528 [2024-06-08 21:14:59.523386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.528 [2024-06-08 21:14:59.523661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.528 [2024-06-08 21:14:59.523668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.528 passed 00:19:21.528 Test: blockdev nvme admin passthru ...passed 00:19:21.528 Test: blockdev copy ...passed 00:19:21.528 00:19:21.528 Run Summary: Type Total Ran Passed Failed Inactive 00:19:21.528 suites 1 1 n/a 0 0 00:19:21.528 tests 23 23 23 0 0 00:19:21.528 asserts 152 152 152 0 n/a 00:19:21.528 00:19:21.528 Elapsed time = 1.313 seconds 00:19:21.829 21:14:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.829 21:14:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.829 21:14:59 -- common/autotest_common.sh@10 -- # set +x 00:19:21.829 21:14:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.829 21:14:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:21.829 21:14:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:21.829 21:14:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:21.829 21:14:59 -- nvmf/common.sh@116 -- # sync 00:19:21.829 21:14:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:21.829 21:14:59 -- nvmf/common.sh@119 -- # set +e 00:19:21.829 21:14:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:21.829 21:14:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:21.829 rmmod nvme_tcp 00:19:21.829 rmmod nvme_fabrics 00:19:21.829 rmmod nvme_keyring 00:19:21.829 21:14:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:21.829 21:14:59 -- nvmf/common.sh@123 -- # set -e 00:19:21.829 21:14:59 -- nvmf/common.sh@124 -- # return 0 00:19:21.829 21:14:59 -- nvmf/common.sh@477 -- # '[' -n 2387842 ']' 00:19:21.829 21:14:59 -- nvmf/common.sh@478 -- # killprocess 2387842 00:19:21.829 21:14:59 -- common/autotest_common.sh@926 -- # '[' -z 2387842 ']' 00:19:21.829 21:14:59 -- common/autotest_common.sh@930 -- # kill -0 2387842 00:19:21.829 21:14:59 -- common/autotest_common.sh@931 -- # uname 00:19:21.829 21:14:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:21.829 21:14:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2387842 00:19:21.829 21:14:59 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:21.829 21:14:59 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:21.829 21:14:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2387842' 00:19:21.829 killing process with pid 2387842 00:19:21.829 21:14:59 -- common/autotest_common.sh@945 -- # kill 2387842 00:19:21.829 21:14:59 -- common/autotest_common.sh@950 -- # wait 2387842 00:19:22.127 21:14:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:22.127 21:14:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:22.127 21:14:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:22.127 21:14:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.127 21:14:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:22.127 21:14:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.127 21:14:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.127 21:14:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.082 21:15:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:24.082 00:19:24.082 real 0m11.609s 00:19:24.082 user 0m12.534s 00:19:24.082 sys 0m5.807s 00:19:24.082 21:15:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.082 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:19:24.082 ************************************ 00:19:24.082 END TEST nvmf_bdevio 00:19:24.082 ************************************ 00:19:24.082 21:15:02 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:24.082 21:15:02 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:24.082 21:15:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:24.082 21:15:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:24.082 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:19:24.082 ************************************ 00:19:24.082 START TEST nvmf_bdevio_no_huge 00:19:24.082 ************************************ 00:19:24.082 21:15:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:24.344 * Looking for test storage... 
00:19:24.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.344 21:15:02 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.344 21:15:02 -- nvmf/common.sh@7 -- # uname -s 00:19:24.344 21:15:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.344 21:15:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.344 21:15:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.344 21:15:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.344 21:15:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.344 21:15:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.344 21:15:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.344 21:15:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.344 21:15:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.344 21:15:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.344 21:15:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.344 21:15:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.344 21:15:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.344 21:15:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.344 21:15:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.345 21:15:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.345 21:15:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.345 21:15:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.345 21:15:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.345 21:15:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.345 21:15:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.345 21:15:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.345 21:15:02 -- paths/export.sh@5 -- # export PATH 00:19:24.345 21:15:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.345 21:15:02 -- nvmf/common.sh@46 -- # : 0 00:19:24.345 21:15:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:24.345 21:15:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:24.345 21:15:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:24.345 21:15:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.345 21:15:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.345 21:15:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:24.345 21:15:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:24.345 21:15:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:24.345 21:15:02 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:24.345 21:15:02 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:24.345 21:15:02 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:24.345 21:15:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:24.345 21:15:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.345 21:15:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:24.345 21:15:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:24.345 21:15:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:24.345 21:15:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.345 21:15:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.345 21:15:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.345 21:15:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:24.345 21:15:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:24.345 21:15:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:24.345 21:15:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.936 21:15:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:30.936 21:15:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:30.936 21:15:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:30.936 21:15:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:30.936 21:15:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:30.936 21:15:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:30.936 21:15:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:30.936 21:15:08 -- nvmf/common.sh@294 -- # net_devs=() 00:19:30.936 21:15:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:30.936 21:15:08 -- nvmf/common.sh@295 
-- # e810=() 00:19:30.936 21:15:08 -- nvmf/common.sh@295 -- # local -ga e810 00:19:30.936 21:15:08 -- nvmf/common.sh@296 -- # x722=() 00:19:30.936 21:15:08 -- nvmf/common.sh@296 -- # local -ga x722 00:19:30.936 21:15:08 -- nvmf/common.sh@297 -- # mlx=() 00:19:30.936 21:15:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:30.936 21:15:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.936 21:15:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:30.936 21:15:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:30.936 21:15:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:30.936 21:15:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:30.936 21:15:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:30.936 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:30.936 21:15:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:30.936 21:15:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:30.936 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:30.936 21:15:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:30.936 21:15:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:30.936 21:15:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:30.936 21:15:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.936 21:15:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:30.936 21:15:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.936 21:15:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:30.936 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:19:30.936 21:15:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.936 21:15:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:30.936 21:15:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.936 21:15:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:30.936 21:15:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.936 21:15:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:30.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:30.937 21:15:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.937 21:15:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:30.937 21:15:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:30.937 21:15:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:30.937 21:15:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:30.937 21:15:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:30.937 21:15:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.937 21:15:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.937 21:15:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.937 21:15:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:30.937 21:15:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.937 21:15:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.937 21:15:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:30.937 21:15:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.937 21:15:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.937 21:15:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:30.937 21:15:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:30.937 21:15:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.937 21:15:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.198 21:15:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.198 21:15:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.198 21:15:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:31.198 21:15:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.460 21:15:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.460 21:15:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.460 21:15:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:31.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:19:31.460 00:19:31.460 --- 10.0.0.2 ping statistics --- 00:19:31.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.460 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:19:31.460 21:15:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:19:31.460 00:19:31.460 --- 10.0.0.1 ping statistics --- 00:19:31.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.460 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:19:31.460 21:15:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.460 21:15:09 -- nvmf/common.sh@410 -- # return 0 00:19:31.460 21:15:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:31.460 21:15:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.460 21:15:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:31.460 21:15:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:31.460 21:15:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.460 21:15:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:31.460 21:15:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:31.460 21:15:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:31.460 21:15:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:31.460 21:15:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:31.460 21:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:31.460 21:15:09 -- nvmf/common.sh@469 -- # nvmfpid=2392854 00:19:31.460 21:15:09 -- nvmf/common.sh@470 -- # waitforlisten 2392854 00:19:31.460 21:15:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:31.460 21:15:09 -- common/autotest_common.sh@819 -- # '[' -z 2392854 ']' 00:19:31.460 21:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.460 21:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:31.460 21:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.460 21:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:31.460 21:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:31.460 [2024-06-08 21:15:09.460668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:31.460 [2024-06-08 21:15:09.460773] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:31.722 [2024-06-08 21:15:09.558933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.722 [2024-06-08 21:15:09.662874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:31.722 [2024-06-08 21:15:09.663025] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.722 [2024-06-08 21:15:09.663035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.722 [2024-06-08 21:15:09.663042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.722 [2024-06-08 21:15:09.663208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:31.722 [2024-06-08 21:15:09.663267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:31.722 [2024-06-08 21:15:09.663391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.722 [2024-06-08 21:15:09.663392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:32.295 21:15:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:32.295 21:15:10 -- common/autotest_common.sh@852 -- # return 0 00:19:32.295 21:15:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:32.295 21:15:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:32.295 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:32.295 21:15:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.295 21:15:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.295 21:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.295 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:32.295 [2024-06-08 21:15:10.285690] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.295 21:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.295 21:15:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:32.295 21:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.295 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:32.295 Malloc0 00:19:32.295 21:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.295 21:15:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:32.295 21:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.295 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:32.295 21:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.295 21:15:10 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:32.295 21:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.295 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:32.295 21:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.295 21:15:10 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.295 21:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.295 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:32.295 [2024-06-08 21:15:10.323441] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.295 21:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.295 21:15:10 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:32.295 21:15:10 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:32.295 21:15:10 -- nvmf/common.sh@520 -- # config=() 00:19:32.295 21:15:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:32.295 21:15:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:32.295 21:15:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:32.295 { 00:19:32.295 "params": { 00:19:32.295 "name": "Nvme$subsystem", 00:19:32.295 "trtype": "$TEST_TRANSPORT", 00:19:32.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.295 "adrfam": "ipv4", 00:19:32.295 
"trsvcid": "$NVMF_PORT", 00:19:32.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.295 "hdgst": ${hdgst:-false}, 00:19:32.295 "ddgst": ${ddgst:-false} 00:19:32.295 }, 00:19:32.295 "method": "bdev_nvme_attach_controller" 00:19:32.295 } 00:19:32.295 EOF 00:19:32.295 )") 00:19:32.295 21:15:10 -- nvmf/common.sh@542 -- # cat 00:19:32.295 21:15:10 -- nvmf/common.sh@544 -- # jq . 00:19:32.295 21:15:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:32.295 21:15:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:32.295 "params": { 00:19:32.295 "name": "Nvme1", 00:19:32.295 "trtype": "tcp", 00:19:32.296 "traddr": "10.0.0.2", 00:19:32.296 "adrfam": "ipv4", 00:19:32.296 "trsvcid": "4420", 00:19:32.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.296 "hdgst": false, 00:19:32.296 "ddgst": false 00:19:32.296 }, 00:19:32.296 "method": "bdev_nvme_attach_controller" 00:19:32.296 }' 00:19:32.296 [2024-06-08 21:15:10.375768] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:32.296 [2024-06-08 21:15:10.375838] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2393176 ] 00:19:32.557 [2024-06-08 21:15:10.443566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:32.557 [2024-06-08 21:15:10.538317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.557 [2024-06-08 21:15:10.538458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.557 [2024-06-08 21:15:10.538623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.818 [2024-06-08 21:15:10.799100] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:32.818 [2024-06-08 21:15:10.799126] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:32.818 I/O targets: 00:19:32.818 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:32.818 00:19:32.818 00:19:32.818 CUnit - A unit testing framework for C - Version 2.1-3 00:19:32.818 http://cunit.sourceforge.net/ 00:19:32.818 00:19:32.818 00:19:32.818 Suite: bdevio tests on: Nvme1n1 00:19:32.818 Test: blockdev write read block ...passed 00:19:32.818 Test: blockdev write zeroes read block ...passed 00:19:32.818 Test: blockdev write zeroes read no split ...passed 00:19:33.079 Test: blockdev write zeroes read split ...passed 00:19:33.079 Test: blockdev write zeroes read split partial ...passed 00:19:33.079 Test: blockdev reset ...[2024-06-08 21:15:11.039807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.079 [2024-06-08 21:15:11.039869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa7b60 (9): Bad file descriptor 00:19:33.079 [2024-06-08 21:15:11.068905] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:33.079 passed 00:19:33.079 Test: blockdev write read 8 blocks ...passed 00:19:33.079 Test: blockdev write read size > 128k ...passed 00:19:33.079 Test: blockdev write read invalid size ...passed 00:19:33.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:33.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:33.079 Test: blockdev write read max offset ...passed 00:19:33.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:33.340 Test: blockdev writev readv 8 blocks ...passed 00:19:33.340 Test: blockdev writev readv 30 x 1block ...passed 00:19:33.340 Test: blockdev writev readv block ...passed 00:19:33.340 Test: blockdev writev readv size > 128k ...passed 00:19:33.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:33.340 Test: blockdev comparev and writev ...[2024-06-08 21:15:11.298183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.298206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.298217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.298223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.298716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.298725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.298734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.298739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.299230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.299238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.299247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.299252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.299739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.299747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.299756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:33.340 [2024-06-08 21:15:11.299762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:33.340 passed 00:19:33.340 Test: blockdev nvme passthru rw ...passed 00:19:33.340 Test: blockdev nvme passthru vendor specific ...[2024-06-08 21:15:11.383217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.340 [2024-06-08 21:15:11.383227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.383576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.340 [2024-06-08 21:15:11.383584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.383920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.340 [2024-06-08 21:15:11.383927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:33.340 [2024-06-08 21:15:11.384314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:33.340 [2024-06-08 21:15:11.384321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:33.340 passed 00:19:33.340 Test: blockdev nvme admin passthru ...passed 00:19:33.601 Test: blockdev copy ...passed 00:19:33.601 00:19:33.601 Run Summary: Type Total Ran Passed Failed Inactive 00:19:33.601 suites 1 1 n/a 0 0 00:19:33.601 tests 23 23 23 0 0 00:19:33.601 asserts 152 152 152 0 n/a 00:19:33.601 00:19:33.601 Elapsed time = 1.297 seconds 00:19:33.862 21:15:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.862 21:15:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.862 21:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:33.862 21:15:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:33.862 21:15:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:33.862 21:15:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:33.862 21:15:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:33.862 21:15:11 -- nvmf/common.sh@116 -- # sync 00:19:33.862 21:15:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:33.862 21:15:11 -- nvmf/common.sh@119 -- # set +e 00:19:33.862 21:15:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:33.862 21:15:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:33.862 rmmod nvme_tcp 00:19:33.862 rmmod nvme_fabrics 00:19:33.862 rmmod nvme_keyring 00:19:33.862 21:15:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:33.862 21:15:11 -- nvmf/common.sh@123 -- # set -e 00:19:33.862 21:15:11 -- nvmf/common.sh@124 -- # return 0 00:19:33.862 21:15:11 -- nvmf/common.sh@477 -- # '[' -n 2392854 ']' 00:19:33.862 21:15:11 -- nvmf/common.sh@478 -- # killprocess 2392854 00:19:33.862 21:15:11 -- common/autotest_common.sh@926 -- # '[' -z 2392854 ']' 00:19:33.862 21:15:11 -- common/autotest_common.sh@930 -- # kill -0 2392854 00:19:33.862 21:15:11 -- common/autotest_common.sh@931 -- # uname 00:19:33.862 21:15:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:33.862 21:15:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2392854 00:19:33.862 21:15:11 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:33.862 21:15:11 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:33.862 21:15:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2392854' 00:19:33.862 killing process with pid 2392854 00:19:33.862 21:15:11 -- common/autotest_common.sh@945 -- # kill 2392854 00:19:33.862 21:15:11 -- common/autotest_common.sh@950 -- # wait 2392854 00:19:34.123 21:15:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:34.123 21:15:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:34.123 21:15:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:34.123 21:15:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.123 21:15:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:34.123 21:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.123 21:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.123 21:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.671 21:15:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:36.671 00:19:36.671 real 0m12.110s 00:19:36.671 user 0m14.289s 00:19:36.671 sys 0m6.203s 00:19:36.671 21:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.671 21:15:14 -- common/autotest_common.sh@10 -- # set +x 00:19:36.671 ************************************ 00:19:36.671 END TEST nvmf_bdevio_no_huge 00:19:36.671 ************************************ 00:19:36.671 21:15:14 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:36.671 21:15:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:36.671 21:15:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:36.671 21:15:14 -- common/autotest_common.sh@10 -- # set +x 00:19:36.671 ************************************ 00:19:36.671 START TEST nvmf_tls 00:19:36.671 ************************************ 00:19:36.671 21:15:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:36.671 * Looking for test storage... 
00:19:36.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.671 21:15:14 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.671 21:15:14 -- nvmf/common.sh@7 -- # uname -s 00:19:36.671 21:15:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.671 21:15:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.671 21:15:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.671 21:15:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.671 21:15:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.671 21:15:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.671 21:15:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.671 21:15:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.671 21:15:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.671 21:15:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.671 21:15:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.671 21:15:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.672 21:15:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.672 21:15:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.672 21:15:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.672 21:15:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.672 21:15:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.672 21:15:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.672 21:15:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.672 21:15:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.672 21:15:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.672 21:15:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.672 21:15:14 -- paths/export.sh@5 -- # export PATH 00:19:36.672 21:15:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.672 21:15:14 -- nvmf/common.sh@46 -- # : 0 00:19:36.672 21:15:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:36.672 21:15:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:36.672 21:15:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:36.672 21:15:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.672 21:15:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.672 21:15:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:36.672 21:15:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:36.672 21:15:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:36.672 21:15:14 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:36.672 21:15:14 -- target/tls.sh@71 -- # nvmftestinit 00:19:36.672 21:15:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:36.672 21:15:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.672 21:15:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:36.672 21:15:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:36.672 21:15:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:36.672 21:15:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.672 21:15:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.672 21:15:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.672 21:15:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:36.672 21:15:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:36.672 21:15:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:36.672 21:15:14 -- common/autotest_common.sh@10 -- # set +x 00:19:43.256 21:15:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.256 21:15:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:43.256 21:15:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:43.256 21:15:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:43.256 21:15:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:43.256 21:15:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:43.256 21:15:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:43.256 21:15:21 -- nvmf/common.sh@294 -- # net_devs=() 00:19:43.256 21:15:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:43.256 21:15:21 -- nvmf/common.sh@295 -- # e810=() 00:19:43.256 
21:15:21 -- nvmf/common.sh@295 -- # local -ga e810 00:19:43.256 21:15:21 -- nvmf/common.sh@296 -- # x722=() 00:19:43.256 21:15:21 -- nvmf/common.sh@296 -- # local -ga x722 00:19:43.256 21:15:21 -- nvmf/common.sh@297 -- # mlx=() 00:19:43.256 21:15:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:43.256 21:15:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.256 21:15:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:43.256 21:15:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:43.256 21:15:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:43.256 21:15:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.256 21:15:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:43.256 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:43.256 21:15:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.256 21:15:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:43.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:43.256 21:15:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:43.256 21:15:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.256 21:15:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.256 21:15:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.256 21:15:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.256 21:15:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:43.256 Found net devices under 
0000:4b:00.0: cvl_0_0 00:19:43.256 21:15:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.256 21:15:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.256 21:15:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.256 21:15:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.256 21:15:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.256 21:15:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:43.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:43.256 21:15:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.256 21:15:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:43.256 21:15:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:43.256 21:15:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:43.256 21:15:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:43.256 21:15:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.256 21:15:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.256 21:15:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.256 21:15:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:43.256 21:15:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.256 21:15:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.256 21:15:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:43.256 21:15:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.256 21:15:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.256 21:15:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:43.256 21:15:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:43.256 21:15:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.256 21:15:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.256 21:15:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.256 21:15:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.256 21:15:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:43.256 21:15:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.517 21:15:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.517 21:15:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.517 21:15:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:43.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:19:43.517 00:19:43.517 --- 10.0.0.2 ping statistics --- 00:19:43.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.517 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:19:43.517 21:15:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:19:43.517 00:19:43.517 --- 10.0.0.1 ping statistics --- 00:19:43.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.517 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:19:43.517 21:15:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.517 21:15:21 -- nvmf/common.sh@410 -- # return 0 00:19:43.517 21:15:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.517 21:15:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.517 21:15:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:43.517 21:15:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:43.517 21:15:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.517 21:15:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:43.517 21:15:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:43.517 21:15:21 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:43.517 21:15:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.517 21:15:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.517 21:15:21 -- common/autotest_common.sh@10 -- # set +x 00:19:43.517 21:15:21 -- nvmf/common.sh@469 -- # nvmfpid=2397796 00:19:43.517 21:15:21 -- nvmf/common.sh@470 -- # waitforlisten 2397796 00:19:43.517 21:15:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:43.517 21:15:21 -- common/autotest_common.sh@819 -- # '[' -z 2397796 ']' 00:19:43.517 21:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.517 21:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.517 21:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.517 21:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.517 21:15:21 -- common/autotest_common.sh@10 -- # set +x 00:19:43.517 [2024-06-08 21:15:21.497567] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:43.517 [2024-06-08 21:15:21.497646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.517 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.517 [2024-06-08 21:15:21.590538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.777 [2024-06-08 21:15:21.680922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:43.777 [2024-06-08 21:15:21.681069] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.777 [2024-06-08 21:15:21.681078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.777 [2024-06-08 21:15:21.681085] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:43.777 [2024-06-08 21:15:21.681109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.348 21:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.348 21:15:22 -- common/autotest_common.sh@852 -- # return 0 00:19:44.348 21:15:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.348 21:15:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:44.348 21:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:44.348 21:15:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.348 21:15:22 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:44.348 21:15:22 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:44.609 true 00:19:44.609 21:15:22 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:44.609 21:15:22 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:44.609 21:15:22 -- target/tls.sh@82 -- # version=0 00:19:44.609 21:15:22 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:44.609 21:15:22 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:44.870 21:15:22 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:44.870 21:15:22 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:45.131 21:15:22 -- target/tls.sh@90 -- # version=13 00:19:45.131 21:15:22 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:45.131 21:15:22 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:45.131 21:15:23 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:45.131 21:15:23 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:45.391 21:15:23 -- target/tls.sh@98 -- # version=7 00:19:45.391 21:15:23 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:45.391 21:15:23 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:45.391 21:15:23 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:45.391 21:15:23 -- target/tls.sh@105 -- # ktls=false 00:19:45.391 21:15:23 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:45.391 21:15:23 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:45.686 21:15:23 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:45.686 21:15:23 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:45.950 21:15:23 -- target/tls.sh@113 -- # ktls=true 00:19:45.950 21:15:23 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:45.950 21:15:23 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:45.950 21:15:23 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:45.950 21:15:23 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:46.211 21:15:24 -- target/tls.sh@121 -- # ktls=false 00:19:46.211 21:15:24 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:46.211 21:15:24 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:46.211 21:15:24 -- target/tls.sh@49 -- # local key hash crc 00:19:46.211 21:15:24 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:46.211 21:15:24 -- target/tls.sh@51 -- # hash=01 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # gzip -1 -c 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # tail -c8 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # head -c 4 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # crc='p$H�' 00:19:46.211 21:15:24 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:46.211 21:15:24 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:46.211 21:15:24 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:46.211 21:15:24 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:46.211 21:15:24 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:46.211 21:15:24 -- target/tls.sh@49 -- # local key hash crc 00:19:46.211 21:15:24 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:46.211 21:15:24 -- target/tls.sh@51 -- # hash=01 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # gzip -1 -c 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # tail -c8 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # head -c 4 00:19:46.211 21:15:24 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:46.211 21:15:24 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:46.211 21:15:24 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:46.211 21:15:24 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:46.211 21:15:24 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:46.211 21:15:24 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:46.211 21:15:24 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:46.211 21:15:24 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:46.211 21:15:24 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:46.211 21:15:24 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:46.211 21:15:24 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:46.211 21:15:24 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:46.211 21:15:24 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:46.471 21:15:24 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:46.471 21:15:24 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:46.471 21:15:24 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.732 [2024-06-08 21:15:24.612355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
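[editor sketch] The format_interchange_psk trace above reduces to a few lines of shell. The following is a minimal standalone re-creation based only on the commands visible in the trace (same cleartext key, hash identifier 01); it is not the tls.sh source itself, and the only added interpretation is that the 4 bytes kept from the gzip output are the CRC32 field of the gzip trailer:

    key=00112233445566778899aabbccddeeff     # cleartext PSK used in the trace
    hash=01                                  # hash identifier, 01 as in the trace
    # gzip's 8-byte trailer is CRC32 (little-endian) followed by ISIZE; keep only the CRC32
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # interchange form: NVMeTLSkey-1:<hash>:base64(key || crc32):
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as logged above
    # note: like the traced helper, this holds raw CRC bytes in a shell variable,
    # which only behaves when those bytes contain no NUL byte or trailing newline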
00:19:46.732 21:15:24 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.732 21:15:24 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.992 [2024-06-08 21:15:24.897059] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.992 [2024-06-08 21:15:24.897235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.992 21:15:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.992 malloc0 00:19:46.992 21:15:25 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.252 21:15:25 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:47.512 21:15:25 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:47.512 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.510 Initializing NVMe Controllers 00:19:57.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:57.510 Initialization complete. Launching workers. 
00:19:57.510 ======================================================== 00:19:57.510 Latency(us) 00:19:57.510 Device Information : IOPS MiB/s Average min max 00:19:57.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19664.34 76.81 3254.58 1112.69 6485.46 00:19:57.510 ======================================================== 00:19:57.510 Total : 19664.34 76.81 3254.58 1112.69 6485.46 00:19:57.510 00:19:57.510 21:15:35 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:57.510 21:15:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.510 21:15:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.510 21:15:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.510 21:15:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:57.510 21:15:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.510 21:15:35 -- target/tls.sh@28 -- # bdevperf_pid=2400571 00:19:57.510 21:15:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.510 21:15:35 -- target/tls.sh@31 -- # waitforlisten 2400571 /var/tmp/bdevperf.sock 00:19:57.510 21:15:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.510 21:15:35 -- common/autotest_common.sh@819 -- # '[' -z 2400571 ']' 00:19:57.510 21:15:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.510 21:15:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:57.510 21:15:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.510 21:15:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:57.510 21:15:35 -- common/autotest_common.sh@10 -- # set +x 00:19:57.510 [2024-06-08 21:15:35.503634] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
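Condensed for readability, the flow the trace above just exercised with spdk_nvme_perf, and that the bdevperf-based checks below repeat, is the RPC sequence sketched here. rpc stands for scripts/rpc.py, paths are shortened, and the ip netns wrapper around the target-side commands is omitted; all of that is illustrative shorthand, not the literal script:

  rpc=./scripts/rpc.py
  key=./test/nvmf/target/key1.txt

  # Target side: TCP transport, a subsystem, a TLS-enabled listener (-k),
  # a malloc namespace, and a host entry bound to its PSK file.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key

  # Initiator side (bdevperf): start the app with -z, attach over TLS with the
  # same PSK, then drive I/O through the attached bdev. The real script waits
  # for /var/tmp/bdevperf.sock to appear before issuing the attach.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $key
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests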
00:19:57.510 [2024-06-08 21:15:35.503691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400571 ] 00:19:57.510 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.510 [2024-06-08 21:15:35.553348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.771 [2024-06-08 21:15:35.604229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.341 21:15:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:58.341 21:15:36 -- common/autotest_common.sh@852 -- # return 0 00:19:58.341 21:15:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:58.341 [2024-06-08 21:15:36.392668] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.603 TLSTESTn1 00:19:58.603 21:15:36 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:58.603 Running I/O for 10 seconds... 00:20:08.603 00:20:08.603 Latency(us) 00:20:08.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.603 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:08.603 Verification LBA range: start 0x0 length 0x2000 00:20:08.603 TLSTESTn1 : 10.06 1702.03 6.65 0.00 0.00 75044.00 8410.45 83449.17 00:20:08.603 =================================================================================================================== 00:20:08.603 Total : 1702.03 6.65 0.00 0.00 75044.00 8410.45 83449.17 00:20:08.603 0 00:20:08.603 21:15:46 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:08.603 21:15:46 -- target/tls.sh@45 -- # killprocess 2400571 00:20:08.603 21:15:46 -- common/autotest_common.sh@926 -- # '[' -z 2400571 ']' 00:20:08.603 21:15:46 -- common/autotest_common.sh@930 -- # kill -0 2400571 00:20:08.603 21:15:46 -- common/autotest_common.sh@931 -- # uname 00:20:08.603 21:15:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:08.603 21:15:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2400571 00:20:08.885 21:15:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:08.885 21:15:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:08.885 21:15:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2400571' 00:20:08.885 killing process with pid 2400571 00:20:08.885 21:15:46 -- common/autotest_common.sh@945 -- # kill 2400571 00:20:08.885 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.885 00:20:08.885 Latency(us) 00:20:08.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.885 =================================================================================================================== 00:20:08.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.885 21:15:46 -- common/autotest_common.sh@950 -- # wait 2400571 00:20:08.885 21:15:46 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:08.885 21:15:46 -- common/autotest_common.sh@640 -- # local es=0 00:20:08.885 21:15:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:08.885 21:15:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:08.885 21:15:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:08.885 21:15:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:08.885 21:15:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:08.885 21:15:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:08.885 21:15:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:08.885 21:15:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:08.885 21:15:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:08.885 21:15:46 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:08.885 21:15:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:08.885 21:15:46 -- target/tls.sh@28 -- # bdevperf_pid=2402707 00:20:08.885 21:15:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:08.885 21:15:46 -- target/tls.sh@31 -- # waitforlisten 2402707 /var/tmp/bdevperf.sock 00:20:08.885 21:15:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:08.885 21:15:46 -- common/autotest_common.sh@819 -- # '[' -z 2402707 ']' 00:20:08.885 21:15:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.885 21:15:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:08.885 21:15:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.885 21:15:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:08.885 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:20:08.885 [2024-06-08 21:15:46.901453] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:08.885 [2024-06-08 21:15:46.901508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402707 ] 00:20:08.886 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.886 [2024-06-08 21:15:46.951292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.146 [2024-06-08 21:15:47.002711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.718 21:15:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:09.718 21:15:47 -- common/autotest_common.sh@852 -- # return 0 00:20:09.718 21:15:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:09.718 [2024-06-08 21:15:47.807396] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.980 [2024-06-08 21:15:47.811978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:09.980 [2024-06-08 21:15:47.812595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2327e10 (107): Transport endpoint is not connected 00:20:09.980 [2024-06-08 21:15:47.813590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2327e10 (9): Bad file descriptor 00:20:09.980 [2024-06-08 21:15:47.814592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.980 [2024-06-08 21:15:47.814598] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:09.980 [2024-06-08 21:15:47.814604] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:09.980 request: 00:20:09.980 { 00:20:09.980 "name": "TLSTEST", 00:20:09.980 "trtype": "tcp", 00:20:09.980 "traddr": "10.0.0.2", 00:20:09.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.980 "adrfam": "ipv4", 00:20:09.980 "trsvcid": "4420", 00:20:09.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.980 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:09.980 "method": "bdev_nvme_attach_controller", 00:20:09.980 "req_id": 1 00:20:09.980 } 00:20:09.980 Got JSON-RPC error response 00:20:09.980 response: 00:20:09.980 { 00:20:09.980 "code": -32602, 00:20:09.980 "message": "Invalid parameters" 00:20:09.980 } 00:20:09.980 21:15:47 -- target/tls.sh@36 -- # killprocess 2402707 00:20:09.980 21:15:47 -- common/autotest_common.sh@926 -- # '[' -z 2402707 ']' 00:20:09.980 21:15:47 -- common/autotest_common.sh@930 -- # kill -0 2402707 00:20:09.980 21:15:47 -- common/autotest_common.sh@931 -- # uname 00:20:09.980 21:15:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.980 21:15:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2402707 00:20:09.980 21:15:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:09.980 21:15:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:09.980 21:15:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2402707' 00:20:09.980 killing process with pid 2402707 00:20:09.980 21:15:47 -- common/autotest_common.sh@945 -- # kill 2402707 00:20:09.980 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.980 00:20:09.980 Latency(us) 00:20:09.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.980 =================================================================================================================== 00:20:09.980 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.980 21:15:47 -- common/autotest_common.sh@950 -- # wait 2402707 00:20:09.980 21:15:47 -- target/tls.sh@37 -- # return 1 00:20:09.980 21:15:47 -- common/autotest_common.sh@643 -- # es=1 00:20:09.980 21:15:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:09.980 21:15:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:09.980 21:15:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:09.980 21:15:47 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.980 21:15:47 -- common/autotest_common.sh@640 -- # local es=0 00:20:09.980 21:15:47 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.980 21:15:47 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:09.980 21:15:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.980 21:15:47 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:09.980 21:15:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.980 21:15:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:09.980 21:15:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.980 21:15:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.980 21:15:48 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:09.980 21:15:48 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:09.980 21:15:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.980 21:15:48 -- target/tls.sh@28 -- # bdevperf_pid=2402960 00:20:09.980 21:15:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.980 21:15:48 -- target/tls.sh@31 -- # waitforlisten 2402960 /var/tmp/bdevperf.sock 00:20:09.980 21:15:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.980 21:15:48 -- common/autotest_common.sh@819 -- # '[' -z 2402960 ']' 00:20:09.980 21:15:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.980 21:15:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.980 21:15:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.980 21:15:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.980 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:20:09.980 [2024-06-08 21:15:48.045905] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:09.980 [2024-06-08 21:15:48.045959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402960 ] 00:20:09.980 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.241 [2024-06-08 21:15:48.095660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.241 [2024-06-08 21:15:48.144827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.813 21:15:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:10.813 21:15:48 -- common/autotest_common.sh@852 -- # return 0 00:20:10.813 21:15:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.074 [2024-06-08 21:15:48.953558] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.074 [2024-06-08 21:15:48.963955] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:11.074 [2024-06-08 21:15:48.963973] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:11.074 [2024-06-08 21:15:48.963993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:11.074 [2024-06-08 21:15:48.964744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113ee10 (107): Transport endpoint is not connected 00:20:11.074 [2024-06-08 21:15:48.965739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x113ee10 (9): Bad file descriptor 00:20:11.074 [2024-06-08 21:15:48.966740] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.074 [2024-06-08 21:15:48.966747] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:11.074 [2024-06-08 21:15:48.966754] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:11.074 request: 00:20:11.074 { 00:20:11.074 "name": "TLSTEST", 00:20:11.074 "trtype": "tcp", 00:20:11.074 "traddr": "10.0.0.2", 00:20:11.074 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.074 "adrfam": "ipv4", 00:20:11.074 "trsvcid": "4420", 00:20:11.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.074 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:11.074 "method": "bdev_nvme_attach_controller", 00:20:11.074 "req_id": 1 00:20:11.074 } 00:20:11.074 Got JSON-RPC error response 00:20:11.074 response: 00:20:11.074 { 00:20:11.074 "code": -32602, 00:20:11.074 "message": "Invalid parameters" 00:20:11.074 } 00:20:11.074 21:15:48 -- target/tls.sh@36 -- # killprocess 2402960 00:20:11.074 21:15:48 -- common/autotest_common.sh@926 -- # '[' -z 2402960 ']' 00:20:11.074 21:15:48 -- common/autotest_common.sh@930 -- # kill -0 2402960 00:20:11.074 21:15:48 -- common/autotest_common.sh@931 -- # uname 00:20:11.074 21:15:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:11.074 21:15:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2402960 00:20:11.074 21:15:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:11.074 21:15:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:11.074 21:15:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2402960' 00:20:11.074 killing process with pid 2402960 00:20:11.074 21:15:49 -- common/autotest_common.sh@945 -- # kill 2402960 00:20:11.074 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.074 00:20:11.074 Latency(us) 00:20:11.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.074 =================================================================================================================== 00:20:11.074 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:11.074 21:15:49 -- common/autotest_common.sh@950 -- # wait 2402960 00:20:11.074 21:15:49 -- target/tls.sh@37 -- # return 1 00:20:11.074 21:15:49 -- common/autotest_common.sh@643 -- # es=1 00:20:11.074 21:15:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:11.074 21:15:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:11.074 21:15:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:11.074 21:15:49 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.074 21:15:49 -- common/autotest_common.sh@640 -- # local es=0 00:20:11.074 21:15:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.074 21:15:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:11.074 21:15:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:11.074 21:15:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:11.074 21:15:49 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:11.074 21:15:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:11.074 21:15:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.074 21:15:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:11.074 21:15:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.074 21:15:49 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:11.075 21:15:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.075 21:15:49 -- target/tls.sh@28 -- # bdevperf_pid=2403303 00:20:11.075 21:15:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.075 21:15:49 -- target/tls.sh@31 -- # waitforlisten 2403303 /var/tmp/bdevperf.sock 00:20:11.075 21:15:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.075 21:15:49 -- common/autotest_common.sh@819 -- # '[' -z 2403303 ']' 00:20:11.075 21:15:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.075 21:15:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.075 21:15:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.075 21:15:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.075 21:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:11.335 [2024-06-08 21:15:49.205464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
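The NOT run_bdevperf cases in this stretch (a mismatched key for host1 above, an unregistered host2, and, just starting here, an unregistered cnode2) probe different TLS failure modes: the mismatched key simply dies in the handshake (the spdk_sock_recv / "Transport endpoint is not connected" errors), while the unknown hostnqn or subsystem NQN is rejected because the target finds no PSK for the identity it derives from the pair, the "NVMe0R01 <hostnqn> <subnqn>" string printed in the errors. A purely illustrative way to confirm what the target actually has registered (only host1 on cnode1 was given a PSK in this run):

  # Lists the configured subsystems, listeners and allowed hosts on the running
  # target; illustrative invocation using the same rpc.py as the rest of the run.
  ./scripts/rpc.py nvmf_get_subsystems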
00:20:11.336 [2024-06-08 21:15:49.205518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403303 ] 00:20:11.336 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.336 [2024-06-08 21:15:49.255267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.336 [2024-06-08 21:15:49.305968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.275 21:15:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:12.275 21:15:50 -- common/autotest_common.sh@852 -- # return 0 00:20:12.275 21:15:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:12.275 [2024-06-08 21:15:50.187042] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.275 [2024-06-08 21:15:50.196965] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:12.275 [2024-06-08 21:15:50.196986] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:12.275 [2024-06-08 21:15:50.197005] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.275 [2024-06-08 21:15:50.198232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae10 (107): Transport endpoint is not connected 00:20:12.275 [2024-06-08 21:15:50.199226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae10 (9): Bad file descriptor 00:20:12.275 [2024-06-08 21:15:50.200228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:12.275 [2024-06-08 21:15:50.200234] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:12.275 [2024-06-08 21:15:50.200241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:12.275 request: 00:20:12.275 { 00:20:12.275 "name": "TLSTEST", 00:20:12.275 "trtype": "tcp", 00:20:12.275 "traddr": "10.0.0.2", 00:20:12.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.275 "adrfam": "ipv4", 00:20:12.275 "trsvcid": "4420", 00:20:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:12.275 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:12.275 "method": "bdev_nvme_attach_controller", 00:20:12.275 "req_id": 1 00:20:12.275 } 00:20:12.275 Got JSON-RPC error response 00:20:12.275 response: 00:20:12.275 { 00:20:12.275 "code": -32602, 00:20:12.275 "message": "Invalid parameters" 00:20:12.275 } 00:20:12.275 21:15:50 -- target/tls.sh@36 -- # killprocess 2403303 00:20:12.275 21:15:50 -- common/autotest_common.sh@926 -- # '[' -z 2403303 ']' 00:20:12.275 21:15:50 -- common/autotest_common.sh@930 -- # kill -0 2403303 00:20:12.275 21:15:50 -- common/autotest_common.sh@931 -- # uname 00:20:12.275 21:15:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:12.275 21:15:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2403303 00:20:12.275 21:15:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:12.275 21:15:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:12.275 21:15:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2403303' 00:20:12.275 killing process with pid 2403303 00:20:12.275 21:15:50 -- common/autotest_common.sh@945 -- # kill 2403303 00:20:12.276 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.276 00:20:12.276 Latency(us) 00:20:12.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.276 =================================================================================================================== 00:20:12.276 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.276 21:15:50 -- common/autotest_common.sh@950 -- # wait 2403303 00:20:12.537 21:15:50 -- target/tls.sh@37 -- # return 1 00:20:12.537 21:15:50 -- common/autotest_common.sh@643 -- # es=1 00:20:12.537 21:15:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:12.537 21:15:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:12.537 21:15:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:12.537 21:15:50 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:12.537 21:15:50 -- common/autotest_common.sh@640 -- # local es=0 00:20:12.537 21:15:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:12.537 21:15:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:12.537 21:15:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:12.537 21:15:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:12.537 21:15:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:12.537 21:15:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:12.537 21:15:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.537 21:15:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.537 21:15:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.537 21:15:50 -- target/tls.sh@23 -- # psk= 00:20:12.537 21:15:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.537 21:15:50 -- target/tls.sh@28 
-- # bdevperf_pid=2403446 00:20:12.537 21:15:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.537 21:15:50 -- target/tls.sh@31 -- # waitforlisten 2403446 /var/tmp/bdevperf.sock 00:20:12.537 21:15:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.537 21:15:50 -- common/autotest_common.sh@819 -- # '[' -z 2403446 ']' 00:20:12.537 21:15:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.537 21:15:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:12.537 21:15:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.537 21:15:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:12.537 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:20:12.537 [2024-06-08 21:15:50.423606] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:12.537 [2024-06-08 21:15:50.423662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403446 ] 00:20:12.537 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.537 [2024-06-08 21:15:50.473546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.537 [2024-06-08 21:15:50.525153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.108 21:15:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:13.108 21:15:51 -- common/autotest_common.sh@852 -- # return 0 00:20:13.108 21:15:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:13.369 [2024-06-08 21:15:51.320499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:13.369 [2024-06-08 21:15:51.322362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643890 (9): Bad file descriptor 00:20:13.369 [2024-06-08 21:15:51.323361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.369 [2024-06-08 21:15:51.323368] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:13.369 [2024-06-08 21:15:51.323374] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:13.369 request: 00:20:13.369 { 00:20:13.369 "name": "TLSTEST", 00:20:13.369 "trtype": "tcp", 00:20:13.369 "traddr": "10.0.0.2", 00:20:13.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.369 "adrfam": "ipv4", 00:20:13.369 "trsvcid": "4420", 00:20:13.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.369 "method": "bdev_nvme_attach_controller", 00:20:13.369 "req_id": 1 00:20:13.369 } 00:20:13.369 Got JSON-RPC error response 00:20:13.369 response: 00:20:13.369 { 00:20:13.369 "code": -32602, 00:20:13.369 "message": "Invalid parameters" 00:20:13.369 } 00:20:13.369 21:15:51 -- target/tls.sh@36 -- # killprocess 2403446 00:20:13.369 21:15:51 -- common/autotest_common.sh@926 -- # '[' -z 2403446 ']' 00:20:13.369 21:15:51 -- common/autotest_common.sh@930 -- # kill -0 2403446 00:20:13.369 21:15:51 -- common/autotest_common.sh@931 -- # uname 00:20:13.369 21:15:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:13.369 21:15:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2403446 00:20:13.369 21:15:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:13.369 21:15:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:13.369 21:15:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2403446' 00:20:13.369 killing process with pid 2403446 00:20:13.369 21:15:51 -- common/autotest_common.sh@945 -- # kill 2403446 00:20:13.369 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.369 00:20:13.369 Latency(us) 00:20:13.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.369 =================================================================================================================== 00:20:13.369 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.369 21:15:51 -- common/autotest_common.sh@950 -- # wait 2403446 00:20:13.630 21:15:51 -- target/tls.sh@37 -- # return 1 00:20:13.630 21:15:51 -- common/autotest_common.sh@643 -- # es=1 00:20:13.630 21:15:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.630 21:15:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.630 21:15:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.630 21:15:51 -- target/tls.sh@167 -- # killprocess 2397796 00:20:13.630 21:15:51 -- common/autotest_common.sh@926 -- # '[' -z 2397796 ']' 00:20:13.630 21:15:51 -- common/autotest_common.sh@930 -- # kill -0 2397796 00:20:13.630 21:15:51 -- common/autotest_common.sh@931 -- # uname 00:20:13.630 21:15:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:13.630 21:15:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2397796 00:20:13.630 21:15:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:13.630 21:15:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:13.630 21:15:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2397796' 00:20:13.630 killing process with pid 2397796 00:20:13.630 21:15:51 -- common/autotest_common.sh@945 -- # kill 2397796 00:20:13.630 21:15:51 -- common/autotest_common.sh@950 -- # wait 2397796 00:20:13.630 21:15:51 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:13.630 21:15:51 -- target/tls.sh@49 -- # local key hash crc 00:20:13.630 21:15:51 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:13.630 21:15:51 -- target/tls.sh@51 -- # hash=02 00:20:13.630 21:15:51 -- target/tls.sh@52 -- # gzip 
-1 -c 00:20:13.630 21:15:51 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:20:13.630 21:15:51 -- target/tls.sh@52 -- # head -c 4 00:20:13.630 21:15:51 -- target/tls.sh@52 -- # tail -c8 00:20:13.630 21:15:51 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:13.630 21:15:51 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:13.630 21:15:51 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:13.630 21:15:51 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:13.630 21:15:51 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:13.630 21:15:51 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.630 21:15:51 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:13.630 21:15:51 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.630 21:15:51 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:13.630 21:15:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:13.630 21:15:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:13.630 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:20:13.630 21:15:51 -- nvmf/common.sh@469 -- # nvmfpid=2403687 00:20:13.630 21:15:51 -- nvmf/common.sh@470 -- # waitforlisten 2403687 00:20:13.630 21:15:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.630 21:15:51 -- common/autotest_common.sh@819 -- # '[' -z 2403687 ']' 00:20:13.630 21:15:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.630 21:15:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.631 21:15:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.631 21:15:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.631 21:15:51 -- common/autotest_common.sh@10 -- # set +x 00:20:13.926 [2024-06-08 21:15:51.753948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:13.926 [2024-06-08 21:15:51.754001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.926 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.926 [2024-06-08 21:15:51.835878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.926 [2024-06-08 21:15:51.888477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.926 [2024-06-08 21:15:51.888563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.926 [2024-06-08 21:15:51.888569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.926 [2024-06-08 21:15:51.888574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
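The key_long derivation above is the same format_interchange_psk pipeline as before, only with a 48-byte configured PSK and hash identifier 02, which is why the result starts with NVMeTLSkey-1:02:. Reusing the earlier illustrative snippet (names are not the script's):

  key=00112233445566778899aabbccddeeff0011223344556677
  hh=02
  key_long="NVMeTLSkey-1:${hh}:$({ echo -n "$key"; echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4; } | base64):"
  echo "$key_long" > key_long.txt && chmod 0600 key_long.txt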
00:20:13.926 [2024-06-08 21:15:51.888587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.498 21:15:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.498 21:15:52 -- common/autotest_common.sh@852 -- # return 0 00:20:14.498 21:15:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.498 21:15:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:14.498 21:15:52 -- common/autotest_common.sh@10 -- # set +x 00:20:14.498 21:15:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.498 21:15:52 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:14.498 21:15:52 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:14.498 21:15:52 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.759 [2024-06-08 21:15:52.678438] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.759 21:15:52 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.759 21:15:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:15.020 [2024-06-08 21:15:52.951099] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.020 [2024-06-08 21:15:52.951265] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.020 21:15:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:15.020 malloc0 00:20:15.020 21:15:53 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.281 21:15:53 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:15.541 21:15:53 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:15.541 21:15:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.541 21:15:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.541 21:15:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.542 21:15:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:15.542 21:15:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.542 21:15:53 -- target/tls.sh@28 -- # bdevperf_pid=2404057 00:20:15.542 21:15:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.542 21:15:53 -- target/tls.sh@31 -- # waitforlisten 2404057 /var/tmp/bdevperf.sock 00:20:15.542 21:15:53 -- common/autotest_common.sh@819 -- # '[' -z 2404057 ']' 00:20:15.542 21:15:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 
10 00:20:15.542 21:15:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.542 21:15:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.542 21:15:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.542 21:15:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.542 21:15:53 -- common/autotest_common.sh@10 -- # set +x 00:20:15.542 [2024-06-08 21:15:53.403807] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:15.542 [2024-06-08 21:15:53.403858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404057 ] 00:20:15.542 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.542 [2024-06-08 21:15:53.453581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.542 [2024-06-08 21:15:53.504487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.113 21:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:16.113 21:15:54 -- common/autotest_common.sh@852 -- # return 0 00:20:16.113 21:15:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:16.374 [2024-06-08 21:15:54.317457] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.374 TLSTESTn1 00:20:16.374 21:15:54 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.634 Running I/O for 10 seconds... 
00:20:26.637 00:20:26.637 Latency(us) 00:20:26.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.637 Verification LBA range: start 0x0 length 0x2000 00:20:26.637 TLSTESTn1 : 10.06 1886.23 7.37 0.00 0.00 67720.99 5652.48 80390.83 00:20:26.637 =================================================================================================================== 00:20:26.637 Total : 1886.23 7.37 0.00 0.00 67720.99 5652.48 80390.83 00:20:26.637 0 00:20:26.637 21:16:04 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.637 21:16:04 -- target/tls.sh@45 -- # killprocess 2404057 00:20:26.637 21:16:04 -- common/autotest_common.sh@926 -- # '[' -z 2404057 ']' 00:20:26.637 21:16:04 -- common/autotest_common.sh@930 -- # kill -0 2404057 00:20:26.637 21:16:04 -- common/autotest_common.sh@931 -- # uname 00:20:26.637 21:16:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.637 21:16:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2404057 00:20:26.637 21:16:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:26.637 21:16:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:26.637 21:16:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2404057' 00:20:26.637 killing process with pid 2404057 00:20:26.637 21:16:04 -- common/autotest_common.sh@945 -- # kill 2404057 00:20:26.637 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.637 00:20:26.637 Latency(us) 00:20:26.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.637 =================================================================================================================== 00:20:26.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.637 21:16:04 -- common/autotest_common.sh@950 -- # wait 2404057 00:20:26.898 21:16:04 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.898 21:16:04 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.898 21:16:04 -- common/autotest_common.sh@640 -- # local es=0 00:20:26.898 21:16:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.898 21:16:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:26.898 21:16:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:26.898 21:16:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:26.898 21:16:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:26.898 21:16:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.898 21:16:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.898 21:16:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.898 21:16:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:26.898 21:16:04 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:26.898 21:16:04 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.898 21:16:04 -- target/tls.sh@28 -- # bdevperf_pid=2406414 00:20:26.898 21:16:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.898 21:16:04 -- target/tls.sh@31 -- # waitforlisten 2406414 /var/tmp/bdevperf.sock 00:20:26.898 21:16:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.898 21:16:04 -- common/autotest_common.sh@819 -- # '[' -z 2406414 ']' 00:20:26.898 21:16:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.898 21:16:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:26.898 21:16:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.898 21:16:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:26.898 21:16:04 -- common/autotest_common.sh@10 -- # set +x 00:20:26.898 [2024-06-08 21:16:04.811704] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:26.898 [2024-06-08 21:16:04.811758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406414 ] 00:20:26.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.898 [2024-06-08 21:16:04.861762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.898 [2024-06-08 21:16:04.911154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.840 21:16:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.840 21:16:05 -- common/autotest_common.sh@852 -- # return 0 00:20:27.840 21:16:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.840 [2024-06-08 21:16:05.719913] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.840 [2024-06-08 21:16:05.719949] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:27.840 request: 00:20:27.840 { 00:20:27.840 "name": "TLSTEST", 00:20:27.840 "trtype": "tcp", 00:20:27.840 "traddr": "10.0.0.2", 00:20:27.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.840 "adrfam": "ipv4", 00:20:27.840 "trsvcid": "4420", 00:20:27.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.840 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:27.840 "method": "bdev_nvme_attach_controller", 00:20:27.840 "req_id": 1 00:20:27.840 } 00:20:27.840 Got JSON-RPC error response 00:20:27.840 response: 00:20:27.840 { 00:20:27.840 "code": -22, 00:20:27.840 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:27.840 } 00:20:27.840 21:16:05 -- target/tls.sh@36 -- # killprocess 2406414 00:20:27.840 21:16:05 -- common/autotest_common.sh@926 -- # '[' -z 2406414 ']' 00:20:27.840 21:16:05 -- 
common/autotest_common.sh@930 -- # kill -0 2406414 00:20:27.840 21:16:05 -- common/autotest_common.sh@931 -- # uname 00:20:27.840 21:16:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.840 21:16:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2406414 00:20:27.840 21:16:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:27.840 21:16:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:27.840 21:16:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2406414' 00:20:27.840 killing process with pid 2406414 00:20:27.840 21:16:05 -- common/autotest_common.sh@945 -- # kill 2406414 00:20:27.840 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.840 00:20:27.840 Latency(us) 00:20:27.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.840 =================================================================================================================== 00:20:27.840 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.840 21:16:05 -- common/autotest_common.sh@950 -- # wait 2406414 00:20:27.840 21:16:05 -- target/tls.sh@37 -- # return 1 00:20:27.840 21:16:05 -- common/autotest_common.sh@643 -- # es=1 00:20:27.840 21:16:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:27.840 21:16:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:27.840 21:16:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:27.840 21:16:05 -- target/tls.sh@183 -- # killprocess 2403687 00:20:27.840 21:16:05 -- common/autotest_common.sh@926 -- # '[' -z 2403687 ']' 00:20:27.840 21:16:05 -- common/autotest_common.sh@930 -- # kill -0 2403687 00:20:27.840 21:16:05 -- common/autotest_common.sh@931 -- # uname 00:20:27.840 21:16:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.840 21:16:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2403687 00:20:28.101 21:16:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:28.101 21:16:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:28.101 21:16:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2403687' 00:20:28.101 killing process with pid 2403687 00:20:28.101 21:16:05 -- common/autotest_common.sh@945 -- # kill 2403687 00:20:28.101 21:16:05 -- common/autotest_common.sh@950 -- # wait 2403687 00:20:28.101 21:16:06 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:28.101 21:16:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:28.101 21:16:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:28.101 21:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:28.101 21:16:06 -- nvmf/common.sh@469 -- # nvmfpid=2406718 00:20:28.101 21:16:06 -- nvmf/common.sh@470 -- # waitforlisten 2406718 00:20:28.101 21:16:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.101 21:16:06 -- common/autotest_common.sh@819 -- # '[' -z 2406718 ']' 00:20:28.101 21:16:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.101 21:16:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:28.101 21:16:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
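Both permission checks in this part of the run hinge on the same rule: the PSK file must be readable by its owner only, so the chmod 0666 done at tls.sh@179 makes bdev_nvme_attach_controller fail with "Incorrect permissions for PSK file" above, and a little further down makes nvmf_subsystem_add_host fail the same way, until the script restores 0600. A quick illustrative check before handing a key file to either RPC (path is shorthand):

  key=./test/nvmf/target/key_long.txt
  # PSK files must be owner-only (0600); anything looser is rejected by tcp_load_psk.
  [ "$(stat -c '%a' "$key")" = "600" ] || chmod 0600 "$key"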
00:20:28.101 21:16:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:28.101 21:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:28.101 [2024-06-08 21:16:06.145301] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:28.101 [2024-06-08 21:16:06.145357] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.101 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.361 [2024-06-08 21:16:06.229576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.361 [2024-06-08 21:16:06.289040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:28.361 [2024-06-08 21:16:06.289144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.361 [2024-06-08 21:16:06.289150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.361 [2024-06-08 21:16:06.289155] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.361 [2024-06-08 21:16:06.289172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.932 21:16:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:28.932 21:16:06 -- common/autotest_common.sh@852 -- # return 0 00:20:28.932 21:16:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:28.932 21:16:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:28.932 21:16:06 -- common/autotest_common.sh@10 -- # set +x 00:20:28.932 21:16:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.932 21:16:06 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.932 21:16:06 -- common/autotest_common.sh@640 -- # local es=0 00:20:28.932 21:16:06 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.932 21:16:06 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:28.932 21:16:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.932 21:16:06 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:28.932 21:16:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.932 21:16:06 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.932 21:16:06 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.932 21:16:06 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.192 [2024-06-08 21:16:07.068766] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.192 21:16:07 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.192 21:16:07 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.453 [2024-06-08 21:16:07.369513] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.453 [2024-06-08 21:16:07.369694] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.453 21:16:07 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.453 malloc0 00:20:29.713 21:16:07 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.713 21:16:07 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:29.974 [2024-06-08 21:16:07.820564] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:29.974 [2024-06-08 21:16:07.820584] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:29.974 [2024-06-08 21:16:07.820597] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:29.974 request: 00:20:29.974 { 00:20:29.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.974 "host": "nqn.2016-06.io.spdk:host1", 00:20:29.974 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:29.974 "method": "nvmf_subsystem_add_host", 00:20:29.974 "req_id": 1 00:20:29.974 } 00:20:29.974 Got JSON-RPC error response 00:20:29.974 response: 00:20:29.974 { 00:20:29.974 "code": -32603, 00:20:29.974 "message": "Internal error" 00:20:29.974 } 00:20:29.974 21:16:07 -- common/autotest_common.sh@643 -- # es=1 00:20:29.974 21:16:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:29.974 21:16:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:29.974 21:16:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:29.974 21:16:07 -- target/tls.sh@189 -- # killprocess 2406718 00:20:29.974 21:16:07 -- common/autotest_common.sh@926 -- # '[' -z 2406718 ']' 00:20:29.974 21:16:07 -- common/autotest_common.sh@930 -- # kill -0 2406718 00:20:29.974 21:16:07 -- common/autotest_common.sh@931 -- # uname 00:20:29.974 21:16:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:29.974 21:16:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2406718 00:20:29.974 21:16:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:29.974 21:16:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:29.974 21:16:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2406718' 00:20:29.974 killing process with pid 2406718 00:20:29.974 21:16:07 -- common/autotest_common.sh@945 -- # kill 2406718 00:20:29.974 21:16:07 -- common/autotest_common.sh@950 -- # wait 2406718 00:20:29.974 21:16:08 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:29.974 21:16:08 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:29.974 21:16:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:29.974 21:16:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:29.974 21:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:29.974 21:16:08 -- nvmf/common.sh@469 -- # nvmfpid=2407140 00:20:29.974 21:16:08 -- nvmf/common.sh@470 -- # waitforlisten 2407140 00:20:29.974 21:16:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:29.974 21:16:08 -- common/autotest_common.sh@819 -- # '[' -z 2407140 ']' 00:20:29.974 21:16:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.974 21:16:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:29.974 21:16:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.974 21:16:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:29.974 21:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:29.974 [2024-06-08 21:16:08.064348] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:29.974 [2024-06-08 21:16:08.064406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.235 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.235 [2024-06-08 21:16:08.144290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.236 [2024-06-08 21:16:08.196203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.236 [2024-06-08 21:16:08.196292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.236 [2024-06-08 21:16:08.196298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.236 [2024-06-08 21:16:08.196303] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
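The nvmf_tgt instances here run with -e 0xFFFF, so the full nvmf tracepoint group mask is enabled; the app_setup_trace notices above name the two ways to get at the events. As a quick reference (instance id 0 as in this run; the copy destination is only illustrative):

    # Snapshot the enabled tracepoints at runtime, exactly as the notice suggests:
    spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis (destination is an example):
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0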
00:20:30.236 [2024-06-08 21:16:08.196317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.807 21:16:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:30.807 21:16:08 -- common/autotest_common.sh@852 -- # return 0 00:20:30.807 21:16:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:30.807 21:16:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:30.807 21:16:08 -- common/autotest_common.sh@10 -- # set +x 00:20:30.807 21:16:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.807 21:16:08 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:30.807 21:16:08 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:30.807 21:16:08 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.067 [2024-06-08 21:16:08.985981] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.067 21:16:08 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.067 21:16:09 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.328 [2024-06-08 21:16:09.270680] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.328 [2024-06-08 21:16:09.270861] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.328 21:16:09 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:31.589 malloc0 00:20:31.589 21:16:09 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:31.589 21:16:09 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:31.850 21:16:09 -- target/tls.sh@197 -- # bdevperf_pid=2407501 00:20:31.850 21:16:09 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.850 21:16:09 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.850 21:16:09 -- target/tls.sh@200 -- # waitforlisten 2407501 /var/tmp/bdevperf.sock 00:20:31.850 21:16:09 -- common/autotest_common.sh@819 -- # '[' -z 2407501 ']' 00:20:31.850 21:16:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.850 21:16:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:31.850 21:16:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
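With the key permissions fixed, setup_nvmf_tgt (target/tls.sh@58-67 in the trace above) now completes. Collapsed into a plain rpc.py sequence, the target-side TLS setup it drives is:

    # Target-side setup performed by setup_nvmf_tgt in this run (all commands appear verbatim in the trace above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"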
00:20:31.850 21:16:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:31.850 21:16:09 -- common/autotest_common.sh@10 -- # set +x 00:20:31.850 [2024-06-08 21:16:09.802019] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:31.850 [2024-06-08 21:16:09.802069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407501 ] 00:20:31.850 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.850 [2024-06-08 21:16:09.850247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.850 [2024-06-08 21:16:09.901425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.792 21:16:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:32.792 21:16:10 -- common/autotest_common.sh@852 -- # return 0 00:20:32.792 21:16:10 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.792 [2024-06-08 21:16:10.709878] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.792 TLSTESTn1 00:20:32.792 21:16:10 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:33.054 21:16:11 -- target/tls.sh@205 -- # tgtconf='{ 00:20:33.054 "subsystems": [ 00:20:33.054 { 00:20:33.054 "subsystem": "iobuf", 00:20:33.054 "config": [ 00:20:33.054 { 00:20:33.054 "method": "iobuf_set_options", 00:20:33.054 "params": { 00:20:33.054 "small_pool_count": 8192, 00:20:33.054 "large_pool_count": 1024, 00:20:33.054 "small_bufsize": 8192, 00:20:33.054 "large_bufsize": 135168 00:20:33.054 } 00:20:33.054 } 00:20:33.054 ] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "sock", 00:20:33.054 "config": [ 00:20:33.054 { 00:20:33.054 "method": "sock_impl_set_options", 00:20:33.054 "params": { 00:20:33.054 "impl_name": "posix", 00:20:33.054 "recv_buf_size": 2097152, 00:20:33.054 "send_buf_size": 2097152, 00:20:33.054 "enable_recv_pipe": true, 00:20:33.054 "enable_quickack": false, 00:20:33.054 "enable_placement_id": 0, 00:20:33.054 "enable_zerocopy_send_server": true, 00:20:33.054 "enable_zerocopy_send_client": false, 00:20:33.054 "zerocopy_threshold": 0, 00:20:33.054 "tls_version": 0, 00:20:33.054 "enable_ktls": false 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "sock_impl_set_options", 00:20:33.054 "params": { 00:20:33.054 "impl_name": "ssl", 00:20:33.054 "recv_buf_size": 4096, 00:20:33.054 "send_buf_size": 4096, 00:20:33.054 "enable_recv_pipe": true, 00:20:33.054 "enable_quickack": false, 00:20:33.054 "enable_placement_id": 0, 00:20:33.054 "enable_zerocopy_send_server": true, 00:20:33.054 "enable_zerocopy_send_client": false, 00:20:33.054 "zerocopy_threshold": 0, 00:20:33.054 "tls_version": 0, 00:20:33.054 "enable_ktls": false 00:20:33.054 } 00:20:33.054 } 00:20:33.054 ] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "vmd", 00:20:33.054 "config": [] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "accel", 00:20:33.054 "config": [ 00:20:33.054 { 00:20:33.054 "method": "accel_set_options", 00:20:33.054 "params": { 00:20:33.054 "small_cache_size": 128, 
00:20:33.054 "large_cache_size": 16, 00:20:33.054 "task_count": 2048, 00:20:33.054 "sequence_count": 2048, 00:20:33.054 "buf_count": 2048 00:20:33.054 } 00:20:33.054 } 00:20:33.054 ] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "bdev", 00:20:33.054 "config": [ 00:20:33.054 { 00:20:33.054 "method": "bdev_set_options", 00:20:33.054 "params": { 00:20:33.054 "bdev_io_pool_size": 65535, 00:20:33.054 "bdev_io_cache_size": 256, 00:20:33.054 "bdev_auto_examine": true, 00:20:33.054 "iobuf_small_cache_size": 128, 00:20:33.054 "iobuf_large_cache_size": 16 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "bdev_raid_set_options", 00:20:33.054 "params": { 00:20:33.054 "process_window_size_kb": 1024 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "bdev_iscsi_set_options", 00:20:33.054 "params": { 00:20:33.054 "timeout_sec": 30 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "bdev_nvme_set_options", 00:20:33.054 "params": { 00:20:33.054 "action_on_timeout": "none", 00:20:33.054 "timeout_us": 0, 00:20:33.054 "timeout_admin_us": 0, 00:20:33.054 "keep_alive_timeout_ms": 10000, 00:20:33.054 "transport_retry_count": 4, 00:20:33.054 "arbitration_burst": 0, 00:20:33.054 "low_priority_weight": 0, 00:20:33.054 "medium_priority_weight": 0, 00:20:33.054 "high_priority_weight": 0, 00:20:33.054 "nvme_adminq_poll_period_us": 10000, 00:20:33.054 "nvme_ioq_poll_period_us": 0, 00:20:33.054 "io_queue_requests": 0, 00:20:33.054 "delay_cmd_submit": true, 00:20:33.054 "bdev_retry_count": 3, 00:20:33.054 "transport_ack_timeout": 0, 00:20:33.054 "ctrlr_loss_timeout_sec": 0, 00:20:33.054 "reconnect_delay_sec": 0, 00:20:33.054 "fast_io_fail_timeout_sec": 0, 00:20:33.054 "generate_uuids": false, 00:20:33.054 "transport_tos": 0, 00:20:33.054 "io_path_stat": false, 00:20:33.054 "allow_accel_sequence": false 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "bdev_nvme_set_hotplug", 00:20:33.054 "params": { 00:20:33.054 "period_us": 100000, 00:20:33.054 "enable": false 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "bdev_malloc_create", 00:20:33.054 "params": { 00:20:33.054 "name": "malloc0", 00:20:33.054 "num_blocks": 8192, 00:20:33.054 "block_size": 4096, 00:20:33.054 "physical_block_size": 4096, 00:20:33.054 "uuid": "281fab18-2276-44e1-ad90-3d145755fe80", 00:20:33.054 "optimal_io_boundary": 0 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "bdev_wait_for_examine" 00:20:33.054 } 00:20:33.054 ] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "nbd", 00:20:33.054 "config": [] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "scheduler", 00:20:33.054 "config": [ 00:20:33.054 { 00:20:33.054 "method": "framework_set_scheduler", 00:20:33.054 "params": { 00:20:33.054 "name": "static" 00:20:33.054 } 00:20:33.054 } 00:20:33.054 ] 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "subsystem": "nvmf", 00:20:33.054 "config": [ 00:20:33.054 { 00:20:33.054 "method": "nvmf_set_config", 00:20:33.054 "params": { 00:20:33.054 "discovery_filter": "match_any", 00:20:33.054 "admin_cmd_passthru": { 00:20:33.054 "identify_ctrlr": false 00:20:33.054 } 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "nvmf_set_max_subsystems", 00:20:33.054 "params": { 00:20:33.054 "max_subsystems": 1024 00:20:33.054 } 00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "nvmf_set_crdt", 00:20:33.054 "params": { 00:20:33.054 "crdt1": 0, 00:20:33.054 "crdt2": 0, 00:20:33.054 "crdt3": 0 00:20:33.054 } 
00:20:33.054 }, 00:20:33.054 { 00:20:33.054 "method": "nvmf_create_transport", 00:20:33.054 "params": { 00:20:33.054 "trtype": "TCP", 00:20:33.054 "max_queue_depth": 128, 00:20:33.054 "max_io_qpairs_per_ctrlr": 127, 00:20:33.054 "in_capsule_data_size": 4096, 00:20:33.054 "max_io_size": 131072, 00:20:33.054 "io_unit_size": 131072, 00:20:33.054 "max_aq_depth": 128, 00:20:33.055 "num_shared_buffers": 511, 00:20:33.055 "buf_cache_size": 4294967295, 00:20:33.055 "dif_insert_or_strip": false, 00:20:33.055 "zcopy": false, 00:20:33.055 "c2h_success": false, 00:20:33.055 "sock_priority": 0, 00:20:33.055 "abort_timeout_sec": 1 00:20:33.055 } 00:20:33.055 }, 00:20:33.055 { 00:20:33.055 "method": "nvmf_create_subsystem", 00:20:33.055 "params": { 00:20:33.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.055 "allow_any_host": false, 00:20:33.055 "serial_number": "SPDK00000000000001", 00:20:33.055 "model_number": "SPDK bdev Controller", 00:20:33.055 "max_namespaces": 10, 00:20:33.055 "min_cntlid": 1, 00:20:33.055 "max_cntlid": 65519, 00:20:33.055 "ana_reporting": false 00:20:33.055 } 00:20:33.055 }, 00:20:33.055 { 00:20:33.055 "method": "nvmf_subsystem_add_host", 00:20:33.055 "params": { 00:20:33.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.055 "host": "nqn.2016-06.io.spdk:host1", 00:20:33.055 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:33.055 } 00:20:33.055 }, 00:20:33.055 { 00:20:33.055 "method": "nvmf_subsystem_add_ns", 00:20:33.055 "params": { 00:20:33.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.055 "namespace": { 00:20:33.055 "nsid": 1, 00:20:33.055 "bdev_name": "malloc0", 00:20:33.055 "nguid": "281FAB18227644E1AD903D145755FE80", 00:20:33.055 "uuid": "281fab18-2276-44e1-ad90-3d145755fe80" 00:20:33.055 } 00:20:33.055 } 00:20:33.055 }, 00:20:33.055 { 00:20:33.055 "method": "nvmf_subsystem_add_listener", 00:20:33.055 "params": { 00:20:33.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.055 "listen_address": { 00:20:33.055 "trtype": "TCP", 00:20:33.055 "adrfam": "IPv4", 00:20:33.055 "traddr": "10.0.0.2", 00:20:33.055 "trsvcid": "4420" 00:20:33.055 }, 00:20:33.055 "secure_channel": true 00:20:33.055 } 00:20:33.055 } 00:20:33.055 ] 00:20:33.055 } 00:20:33.055 ] 00:20:33.055 }' 00:20:33.055 21:16:11 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:33.316 21:16:11 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:33.316 "subsystems": [ 00:20:33.316 { 00:20:33.316 "subsystem": "iobuf", 00:20:33.316 "config": [ 00:20:33.316 { 00:20:33.316 "method": "iobuf_set_options", 00:20:33.316 "params": { 00:20:33.316 "small_pool_count": 8192, 00:20:33.316 "large_pool_count": 1024, 00:20:33.316 "small_bufsize": 8192, 00:20:33.316 "large_bufsize": 135168 00:20:33.316 } 00:20:33.316 } 00:20:33.316 ] 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "subsystem": "sock", 00:20:33.316 "config": [ 00:20:33.316 { 00:20:33.316 "method": "sock_impl_set_options", 00:20:33.316 "params": { 00:20:33.316 "impl_name": "posix", 00:20:33.316 "recv_buf_size": 2097152, 00:20:33.316 "send_buf_size": 2097152, 00:20:33.316 "enable_recv_pipe": true, 00:20:33.316 "enable_quickack": false, 00:20:33.316 "enable_placement_id": 0, 00:20:33.316 "enable_zerocopy_send_server": true, 00:20:33.316 "enable_zerocopy_send_client": false, 00:20:33.316 "zerocopy_threshold": 0, 00:20:33.316 "tls_version": 0, 00:20:33.316 "enable_ktls": false 00:20:33.316 } 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "method": 
"sock_impl_set_options", 00:20:33.316 "params": { 00:20:33.316 "impl_name": "ssl", 00:20:33.316 "recv_buf_size": 4096, 00:20:33.316 "send_buf_size": 4096, 00:20:33.316 "enable_recv_pipe": true, 00:20:33.316 "enable_quickack": false, 00:20:33.316 "enable_placement_id": 0, 00:20:33.316 "enable_zerocopy_send_server": true, 00:20:33.316 "enable_zerocopy_send_client": false, 00:20:33.316 "zerocopy_threshold": 0, 00:20:33.316 "tls_version": 0, 00:20:33.316 "enable_ktls": false 00:20:33.316 } 00:20:33.316 } 00:20:33.316 ] 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "subsystem": "vmd", 00:20:33.316 "config": [] 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "subsystem": "accel", 00:20:33.316 "config": [ 00:20:33.316 { 00:20:33.316 "method": "accel_set_options", 00:20:33.316 "params": { 00:20:33.316 "small_cache_size": 128, 00:20:33.316 "large_cache_size": 16, 00:20:33.316 "task_count": 2048, 00:20:33.316 "sequence_count": 2048, 00:20:33.316 "buf_count": 2048 00:20:33.316 } 00:20:33.316 } 00:20:33.316 ] 00:20:33.316 }, 00:20:33.316 { 00:20:33.316 "subsystem": "bdev", 00:20:33.316 "config": [ 00:20:33.316 { 00:20:33.316 "method": "bdev_set_options", 00:20:33.317 "params": { 00:20:33.317 "bdev_io_pool_size": 65535, 00:20:33.317 "bdev_io_cache_size": 256, 00:20:33.317 "bdev_auto_examine": true, 00:20:33.317 "iobuf_small_cache_size": 128, 00:20:33.317 "iobuf_large_cache_size": 16 00:20:33.317 } 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "method": "bdev_raid_set_options", 00:20:33.317 "params": { 00:20:33.317 "process_window_size_kb": 1024 00:20:33.317 } 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "method": "bdev_iscsi_set_options", 00:20:33.317 "params": { 00:20:33.317 "timeout_sec": 30 00:20:33.317 } 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "method": "bdev_nvme_set_options", 00:20:33.317 "params": { 00:20:33.317 "action_on_timeout": "none", 00:20:33.317 "timeout_us": 0, 00:20:33.317 "timeout_admin_us": 0, 00:20:33.317 "keep_alive_timeout_ms": 10000, 00:20:33.317 "transport_retry_count": 4, 00:20:33.317 "arbitration_burst": 0, 00:20:33.317 "low_priority_weight": 0, 00:20:33.317 "medium_priority_weight": 0, 00:20:33.317 "high_priority_weight": 0, 00:20:33.317 "nvme_adminq_poll_period_us": 10000, 00:20:33.317 "nvme_ioq_poll_period_us": 0, 00:20:33.317 "io_queue_requests": 512, 00:20:33.317 "delay_cmd_submit": true, 00:20:33.317 "bdev_retry_count": 3, 00:20:33.317 "transport_ack_timeout": 0, 00:20:33.317 "ctrlr_loss_timeout_sec": 0, 00:20:33.317 "reconnect_delay_sec": 0, 00:20:33.317 "fast_io_fail_timeout_sec": 0, 00:20:33.317 "generate_uuids": false, 00:20:33.317 "transport_tos": 0, 00:20:33.317 "io_path_stat": false, 00:20:33.317 "allow_accel_sequence": false 00:20:33.317 } 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "method": "bdev_nvme_attach_controller", 00:20:33.317 "params": { 00:20:33.317 "name": "TLSTEST", 00:20:33.317 "trtype": "TCP", 00:20:33.317 "adrfam": "IPv4", 00:20:33.317 "traddr": "10.0.0.2", 00:20:33.317 "trsvcid": "4420", 00:20:33.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.317 "prchk_reftag": false, 00:20:33.317 "prchk_guard": false, 00:20:33.317 "ctrlr_loss_timeout_sec": 0, 00:20:33.317 "reconnect_delay_sec": 0, 00:20:33.317 "fast_io_fail_timeout_sec": 0, 00:20:33.317 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:33.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.317 "hdgst": false, 00:20:33.317 "ddgst": false 00:20:33.317 } 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "method": "bdev_nvme_set_hotplug", 00:20:33.317 
"params": { 00:20:33.317 "period_us": 100000, 00:20:33.317 "enable": false 00:20:33.317 } 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "method": "bdev_wait_for_examine" 00:20:33.317 } 00:20:33.317 ] 00:20:33.317 }, 00:20:33.317 { 00:20:33.317 "subsystem": "nbd", 00:20:33.317 "config": [] 00:20:33.317 } 00:20:33.317 ] 00:20:33.317 }' 00:20:33.317 21:16:11 -- target/tls.sh@208 -- # killprocess 2407501 00:20:33.317 21:16:11 -- common/autotest_common.sh@926 -- # '[' -z 2407501 ']' 00:20:33.317 21:16:11 -- common/autotest_common.sh@930 -- # kill -0 2407501 00:20:33.317 21:16:11 -- common/autotest_common.sh@931 -- # uname 00:20:33.317 21:16:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:33.317 21:16:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2407501 00:20:33.317 21:16:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:33.317 21:16:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:33.317 21:16:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2407501' 00:20:33.317 killing process with pid 2407501 00:20:33.317 21:16:11 -- common/autotest_common.sh@945 -- # kill 2407501 00:20:33.317 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.317 00:20:33.317 Latency(us) 00:20:33.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.317 =================================================================================================================== 00:20:33.317 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:33.317 21:16:11 -- common/autotest_common.sh@950 -- # wait 2407501 00:20:33.578 21:16:11 -- target/tls.sh@209 -- # killprocess 2407140 00:20:33.578 21:16:11 -- common/autotest_common.sh@926 -- # '[' -z 2407140 ']' 00:20:33.578 21:16:11 -- common/autotest_common.sh@930 -- # kill -0 2407140 00:20:33.578 21:16:11 -- common/autotest_common.sh@931 -- # uname 00:20:33.578 21:16:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:33.578 21:16:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2407140 00:20:33.579 21:16:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:33.579 21:16:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:33.579 21:16:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2407140' 00:20:33.579 killing process with pid 2407140 00:20:33.579 21:16:11 -- common/autotest_common.sh@945 -- # kill 2407140 00:20:33.579 21:16:11 -- common/autotest_common.sh@950 -- # wait 2407140 00:20:33.579 21:16:11 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:33.579 21:16:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:33.579 21:16:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:33.579 21:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:33.579 21:16:11 -- target/tls.sh@212 -- # echo '{ 00:20:33.579 "subsystems": [ 00:20:33.579 { 00:20:33.579 "subsystem": "iobuf", 00:20:33.579 "config": [ 00:20:33.579 { 00:20:33.579 "method": "iobuf_set_options", 00:20:33.579 "params": { 00:20:33.579 "small_pool_count": 8192, 00:20:33.579 "large_pool_count": 1024, 00:20:33.579 "small_bufsize": 8192, 00:20:33.579 "large_bufsize": 135168 00:20:33.579 } 00:20:33.579 } 00:20:33.579 ] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "sock", 00:20:33.579 "config": [ 00:20:33.579 { 00:20:33.579 "method": "sock_impl_set_options", 00:20:33.579 "params": { 00:20:33.579 "impl_name": "posix", 00:20:33.579 
"recv_buf_size": 2097152, 00:20:33.579 "send_buf_size": 2097152, 00:20:33.579 "enable_recv_pipe": true, 00:20:33.579 "enable_quickack": false, 00:20:33.579 "enable_placement_id": 0, 00:20:33.579 "enable_zerocopy_send_server": true, 00:20:33.579 "enable_zerocopy_send_client": false, 00:20:33.579 "zerocopy_threshold": 0, 00:20:33.579 "tls_version": 0, 00:20:33.579 "enable_ktls": false 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "sock_impl_set_options", 00:20:33.579 "params": { 00:20:33.579 "impl_name": "ssl", 00:20:33.579 "recv_buf_size": 4096, 00:20:33.579 "send_buf_size": 4096, 00:20:33.579 "enable_recv_pipe": true, 00:20:33.579 "enable_quickack": false, 00:20:33.579 "enable_placement_id": 0, 00:20:33.579 "enable_zerocopy_send_server": true, 00:20:33.579 "enable_zerocopy_send_client": false, 00:20:33.579 "zerocopy_threshold": 0, 00:20:33.579 "tls_version": 0, 00:20:33.579 "enable_ktls": false 00:20:33.579 } 00:20:33.579 } 00:20:33.579 ] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "vmd", 00:20:33.579 "config": [] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "accel", 00:20:33.579 "config": [ 00:20:33.579 { 00:20:33.579 "method": "accel_set_options", 00:20:33.579 "params": { 00:20:33.579 "small_cache_size": 128, 00:20:33.579 "large_cache_size": 16, 00:20:33.579 "task_count": 2048, 00:20:33.579 "sequence_count": 2048, 00:20:33.579 "buf_count": 2048 00:20:33.579 } 00:20:33.579 } 00:20:33.579 ] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "bdev", 00:20:33.579 "config": [ 00:20:33.579 { 00:20:33.579 "method": "bdev_set_options", 00:20:33.579 "params": { 00:20:33.579 "bdev_io_pool_size": 65535, 00:20:33.579 "bdev_io_cache_size": 256, 00:20:33.579 "bdev_auto_examine": true, 00:20:33.579 "iobuf_small_cache_size": 128, 00:20:33.579 "iobuf_large_cache_size": 16 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "bdev_raid_set_options", 00:20:33.579 "params": { 00:20:33.579 "process_window_size_kb": 1024 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "bdev_iscsi_set_options", 00:20:33.579 "params": { 00:20:33.579 "timeout_sec": 30 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "bdev_nvme_set_options", 00:20:33.579 "params": { 00:20:33.579 "action_on_timeout": "none", 00:20:33.579 "timeout_us": 0, 00:20:33.579 "timeout_admin_us": 0, 00:20:33.579 "keep_alive_timeout_ms": 10000, 00:20:33.579 "transport_retry_count": 4, 00:20:33.579 "arbitration_burst": 0, 00:20:33.579 "low_priority_weight": 0, 00:20:33.579 "medium_priority_weight": 0, 00:20:33.579 "high_priority_weight": 0, 00:20:33.579 "nvme_adminq_poll_period_us": 10000, 00:20:33.579 "nvme_ioq_poll_period_us": 0, 00:20:33.579 "io_queue_requests": 0, 00:20:33.579 "delay_cmd_submit": true, 00:20:33.579 "bdev_retry_count": 3, 00:20:33.579 "transport_ack_timeout": 0, 00:20:33.579 "ctrlr_loss_timeout_sec": 0, 00:20:33.579 "reconnect_delay_sec": 0, 00:20:33.579 "fast_io_fail_timeout_sec": 0, 00:20:33.579 "generate_uuids": false, 00:20:33.579 "transport_tos": 0, 00:20:33.579 "io_path_stat": false, 00:20:33.579 "allow_accel_sequence": false 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "bdev_nvme_set_hotplug", 00:20:33.579 "params": { 00:20:33.579 "period_us": 100000, 00:20:33.579 "enable": false 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "bdev_malloc_create", 00:20:33.579 "params": { 00:20:33.579 "name": "malloc0", 00:20:33.579 "num_blocks": 8192, 00:20:33.579 "block_size": 4096, 
00:20:33.579 "physical_block_size": 4096, 00:20:33.579 "uuid": "281fab18-2276-44e1-ad90-3d145755fe80", 00:20:33.579 "optimal_io_boundary": 0 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "bdev_wait_for_examine" 00:20:33.579 } 00:20:33.579 ] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "nbd", 00:20:33.579 "config": [] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "scheduler", 00:20:33.579 "config": [ 00:20:33.579 { 00:20:33.579 "method": "framework_set_scheduler", 00:20:33.579 "params": { 00:20:33.579 "name": "static" 00:20:33.579 } 00:20:33.579 } 00:20:33.579 ] 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "subsystem": "nvmf", 00:20:33.579 "config": [ 00:20:33.579 { 00:20:33.579 "method": "nvmf_set_config", 00:20:33.579 "params": { 00:20:33.579 "discovery_filter": "match_any", 00:20:33.579 "admin_cmd_passthru": { 00:20:33.579 "identify_ctrlr": false 00:20:33.579 } 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "nvmf_set_max_subsystems", 00:20:33.579 "params": { 00:20:33.579 "max_subsystems": 1024 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "nvmf_set_crdt", 00:20:33.579 "params": { 00:20:33.579 "crdt1": 0, 00:20:33.579 "crdt2": 0, 00:20:33.579 "crdt3": 0 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "nvmf_create_transport", 00:20:33.579 "params": { 00:20:33.579 "trtype": "TCP", 00:20:33.579 "max_queue_depth": 128, 00:20:33.579 "max_io_qpairs_per_ctrlr": 127, 00:20:33.579 "in_capsule_data_size": 4096, 00:20:33.579 "max_io_size": 131072, 00:20:33.579 "io_unit_size": 131072, 00:20:33.579 "max_aq_depth": 128, 00:20:33.579 "num_shared_buffers": 511, 00:20:33.579 "buf_cache_size": 4294967295, 00:20:33.579 "dif_insert_or_strip": false, 00:20:33.579 "zcopy": false, 00:20:33.579 "c2h_success": false, 00:20:33.579 "sock_priority": 0, 00:20:33.579 "abort_timeout_sec": 1 00:20:33.579 } 00:20:33.579 }, 00:20:33.579 { 00:20:33.579 "method": "nvmf_create_subsystem", 00:20:33.580 "params": { 00:20:33.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.580 "allow_any_host": false, 00:20:33.580 "serial_number": "SPDK00000000000001", 00:20:33.580 "model_number": "SPDK bdev Controller", 00:20:33.580 "max_namespaces": 10, 00:20:33.580 "min_cntlid": 1, 00:20:33.580 "max_cntlid": 65519, 00:20:33.580 "ana_reporting": false 00:20:33.580 } 00:20:33.580 }, 00:20:33.580 { 00:20:33.580 "method": "nvmf_subsystem_add_host", 00:20:33.580 "params": { 00:20:33.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.580 "host": "nqn.2016-06.io.spdk:host1", 00:20:33.580 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:33.580 } 00:20:33.580 }, 00:20:33.580 { 00:20:33.580 "method": "nvmf_subsystem_add_ns", 00:20:33.580 "params": { 00:20:33.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.580 "namespace": { 00:20:33.580 "nsid": 1, 00:20:33.580 "bdev_name": "malloc0", 00:20:33.580 "nguid": "281FAB18227644E1AD903D145755FE80", 00:20:33.580 "uuid": "281fab18-2276-44e1-ad90-3d145755fe80" 00:20:33.580 } 00:20:33.580 } 00:20:33.580 }, 00:20:33.580 { 00:20:33.580 "method": "nvmf_subsystem_add_listener", 00:20:33.580 "params": { 00:20:33.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.580 "listen_address": { 00:20:33.580 "trtype": "TCP", 00:20:33.580 "adrfam": "IPv4", 00:20:33.580 "traddr": "10.0.0.2", 00:20:33.580 "trsvcid": "4420" 00:20:33.580 }, 00:20:33.580 "secure_channel": true 00:20:33.580 } 00:20:33.580 } 00:20:33.580 ] 00:20:33.580 } 00:20:33.580 ] 00:20:33.580 }' 00:20:33.580 
21:16:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:33.580 21:16:11 -- nvmf/common.sh@469 -- # nvmfpid=2407863 00:20:33.580 21:16:11 -- nvmf/common.sh@470 -- # waitforlisten 2407863 00:20:33.580 21:16:11 -- common/autotest_common.sh@819 -- # '[' -z 2407863 ']' 00:20:33.580 21:16:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.580 21:16:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:33.580 21:16:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.580 21:16:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:33.580 21:16:11 -- common/autotest_common.sh@10 -- # set +x 00:20:33.580 [2024-06-08 21:16:11.632132] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:33.580 [2024-06-08 21:16:11.632175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.580 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.840 [2024-06-08 21:16:11.702939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.841 [2024-06-08 21:16:11.754635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:33.841 [2024-06-08 21:16:11.754723] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.841 [2024-06-08 21:16:11.754729] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.841 [2024-06-08 21:16:11.754734] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
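This third target start is config-driven: instead of repeating the RPCs, the JSON captured earlier with save_config ($tgtconf above) is fed to nvmf_tgt over /dev/fd/62. A sketch of the same pattern, assuming bash process substitution is what backs that descriptor here:

    # Replay a previously saved configuration at startup instead of issuing RPCs afterwards.
    # $tgtconf holds the output of 'rpc.py save_config' dumped earlier in this log.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")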
00:20:33.841 [2024-06-08 21:16:11.754752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.841 [2024-06-08 21:16:11.929133] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.101 [2024-06-08 21:16:11.961163] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.101 [2024-06-08 21:16:11.961346] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.361 21:16:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:34.361 21:16:12 -- common/autotest_common.sh@852 -- # return 0 00:20:34.361 21:16:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:34.361 21:16:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:34.361 21:16:12 -- common/autotest_common.sh@10 -- # set +x 00:20:34.623 21:16:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.623 21:16:12 -- target/tls.sh@216 -- # bdevperf_pid=2407917 00:20:34.623 21:16:12 -- target/tls.sh@217 -- # waitforlisten 2407917 /var/tmp/bdevperf.sock 00:20:34.623 21:16:12 -- common/autotest_common.sh@819 -- # '[' -z 2407917 ']' 00:20:34.623 21:16:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.623 21:16:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.623 21:16:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.623 21:16:12 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:34.623 21:16:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.623 21:16:12 -- common/autotest_common.sh@10 -- # set +x 00:20:34.623 21:16:12 -- target/tls.sh@213 -- # echo '{ 00:20:34.623 "subsystems": [ 00:20:34.623 { 00:20:34.623 "subsystem": "iobuf", 00:20:34.623 "config": [ 00:20:34.623 { 00:20:34.623 "method": "iobuf_set_options", 00:20:34.623 "params": { 00:20:34.623 "small_pool_count": 8192, 00:20:34.623 "large_pool_count": 1024, 00:20:34.623 "small_bufsize": 8192, 00:20:34.623 "large_bufsize": 135168 00:20:34.623 } 00:20:34.623 } 00:20:34.623 ] 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "subsystem": "sock", 00:20:34.623 "config": [ 00:20:34.623 { 00:20:34.623 "method": "sock_impl_set_options", 00:20:34.623 "params": { 00:20:34.623 "impl_name": "posix", 00:20:34.623 "recv_buf_size": 2097152, 00:20:34.623 "send_buf_size": 2097152, 00:20:34.623 "enable_recv_pipe": true, 00:20:34.623 "enable_quickack": false, 00:20:34.623 "enable_placement_id": 0, 00:20:34.623 "enable_zerocopy_send_server": true, 00:20:34.623 "enable_zerocopy_send_client": false, 00:20:34.623 "zerocopy_threshold": 0, 00:20:34.623 "tls_version": 0, 00:20:34.623 "enable_ktls": false 00:20:34.623 } 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "method": "sock_impl_set_options", 00:20:34.623 "params": { 00:20:34.623 "impl_name": "ssl", 00:20:34.623 "recv_buf_size": 4096, 00:20:34.623 "send_buf_size": 4096, 00:20:34.623 "enable_recv_pipe": true, 00:20:34.623 "enable_quickack": false, 00:20:34.623 "enable_placement_id": 0, 00:20:34.623 "enable_zerocopy_send_server": true, 00:20:34.623 "enable_zerocopy_send_client": false, 00:20:34.623 "zerocopy_threshold": 0, 00:20:34.623 "tls_version": 0, 
00:20:34.623 "enable_ktls": false 00:20:34.623 } 00:20:34.623 } 00:20:34.623 ] 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "subsystem": "vmd", 00:20:34.623 "config": [] 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "subsystem": "accel", 00:20:34.623 "config": [ 00:20:34.623 { 00:20:34.623 "method": "accel_set_options", 00:20:34.623 "params": { 00:20:34.623 "small_cache_size": 128, 00:20:34.623 "large_cache_size": 16, 00:20:34.623 "task_count": 2048, 00:20:34.623 "sequence_count": 2048, 00:20:34.623 "buf_count": 2048 00:20:34.623 } 00:20:34.623 } 00:20:34.623 ] 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "subsystem": "bdev", 00:20:34.623 "config": [ 00:20:34.623 { 00:20:34.623 "method": "bdev_set_options", 00:20:34.623 "params": { 00:20:34.623 "bdev_io_pool_size": 65535, 00:20:34.623 "bdev_io_cache_size": 256, 00:20:34.623 "bdev_auto_examine": true, 00:20:34.623 "iobuf_small_cache_size": 128, 00:20:34.623 "iobuf_large_cache_size": 16 00:20:34.623 } 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "method": "bdev_raid_set_options", 00:20:34.623 "params": { 00:20:34.623 "process_window_size_kb": 1024 00:20:34.623 } 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "method": "bdev_iscsi_set_options", 00:20:34.623 "params": { 00:20:34.623 "timeout_sec": 30 00:20:34.623 } 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "method": "bdev_nvme_set_options", 00:20:34.623 "params": { 00:20:34.623 "action_on_timeout": "none", 00:20:34.623 "timeout_us": 0, 00:20:34.623 "timeout_admin_us": 0, 00:20:34.623 "keep_alive_timeout_ms": 10000, 00:20:34.623 "transport_retry_count": 4, 00:20:34.623 "arbitration_burst": 0, 00:20:34.623 "low_priority_weight": 0, 00:20:34.623 "medium_priority_weight": 0, 00:20:34.623 "high_priority_weight": 0, 00:20:34.623 "nvme_adminq_poll_period_us": 10000, 00:20:34.623 "nvme_ioq_poll_period_us": 0, 00:20:34.623 "io_queue_requests": 512, 00:20:34.623 "delay_cmd_submit": true, 00:20:34.623 "bdev_retry_count": 3, 00:20:34.623 "transport_ack_timeout": 0, 00:20:34.623 "ctrlr_loss_timeout_sec": 0, 00:20:34.623 "reconnect_delay_sec": 0, 00:20:34.623 "fast_io_fail_timeout_sec": 0, 00:20:34.623 "generate_uuids": false, 00:20:34.623 "transport_tos": 0, 00:20:34.623 "io_path_stat": false, 00:20:34.623 "allow_accel_sequence": false 00:20:34.623 } 00:20:34.623 }, 00:20:34.623 { 00:20:34.623 "method": "bdev_nvme_attach_controller", 00:20:34.623 "params": { 00:20:34.623 "name": "TLSTEST", 00:20:34.623 "trtype": "TCP", 00:20:34.623 "adrfam": "IPv4", 00:20:34.623 "traddr": "10.0.0.2", 00:20:34.623 "trsvcid": "4420", 00:20:34.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.623 "prchk_reftag": false, 00:20:34.623 "prchk_guard": false, 00:20:34.624 "ctrlr_loss_timeout_sec": 0, 00:20:34.624 "reconnect_delay_sec": 0, 00:20:34.624 "fast_io_fail_timeout_sec": 0, 00:20:34.624 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:34.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.624 "hdgst": false, 00:20:34.624 "ddgst": false 00:20:34.624 } 00:20:34.624 }, 00:20:34.624 { 00:20:34.624 "method": "bdev_nvme_set_hotplug", 00:20:34.624 "params": { 00:20:34.624 "period_us": 100000, 00:20:34.624 "enable": false 00:20:34.624 } 00:20:34.624 }, 00:20:34.624 { 00:20:34.624 "method": "bdev_wait_for_examine" 00:20:34.624 } 00:20:34.624 ] 00:20:34.624 }, 00:20:34.624 { 00:20:34.624 "subsystem": "nbd", 00:20:34.624 "config": [] 00:20:34.624 } 00:20:34.624 ] 00:20:34.624 }' 00:20:34.624 [2024-06-08 21:16:12.526639] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 
initialization... 00:20:34.624 [2024-06-08 21:16:12.526692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407917 ] 00:20:34.624 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.624 [2024-06-08 21:16:12.576580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.624 [2024-06-08 21:16:12.627237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.887 [2024-06-08 21:16:12.742808] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.489 21:16:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.489 21:16:13 -- common/autotest_common.sh@852 -- # return 0 00:20:35.489 21:16:13 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:35.489 Running I/O for 10 seconds... 00:20:45.490 00:20:45.490 Latency(us) 00:20:45.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.490 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:45.490 Verification LBA range: start 0x0 length 0x2000 00:20:45.490 TLSTESTn1 : 10.06 1707.97 6.67 0.00 0.00 74784.15 8355.84 81264.64 00:20:45.490 =================================================================================================================== 00:20:45.490 Total : 1707.97 6.67 0.00 0.00 74784.15 8355.84 81264.64 00:20:45.490 0 00:20:45.490 21:16:23 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:45.490 21:16:23 -- target/tls.sh@223 -- # killprocess 2407917 00:20:45.490 21:16:23 -- common/autotest_common.sh@926 -- # '[' -z 2407917 ']' 00:20:45.490 21:16:23 -- common/autotest_common.sh@930 -- # kill -0 2407917 00:20:45.490 21:16:23 -- common/autotest_common.sh@931 -- # uname 00:20:45.490 21:16:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.490 21:16:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2407917 00:20:45.490 21:16:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:45.490 21:16:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:45.490 21:16:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2407917' 00:20:45.490 killing process with pid 2407917 00:20:45.490 21:16:23 -- common/autotest_common.sh@945 -- # kill 2407917 00:20:45.490 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.490 00:20:45.490 Latency(us) 00:20:45.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.490 =================================================================================================================== 00:20:45.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.490 21:16:23 -- common/autotest_common.sh@950 -- # wait 2407917 00:20:45.750 21:16:23 -- target/tls.sh@224 -- # killprocess 2407863 00:20:45.750 21:16:23 -- common/autotest_common.sh@926 -- # '[' -z 2407863 ']' 00:20:45.750 21:16:23 -- common/autotest_common.sh@930 -- # kill -0 2407863 00:20:45.750 21:16:23 -- common/autotest_common.sh@931 -- # uname 00:20:45.750 21:16:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.750 21:16:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2407863 00:20:45.750 21:16:23 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:45.750 21:16:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:45.750 21:16:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2407863' 00:20:45.750 killing process with pid 2407863 00:20:45.750 21:16:23 -- common/autotest_common.sh@945 -- # kill 2407863 00:20:45.750 21:16:23 -- common/autotest_common.sh@950 -- # wait 2407863 00:20:45.750 21:16:23 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:45.750 21:16:23 -- target/tls.sh@227 -- # cleanup 00:20:45.750 21:16:23 -- target/tls.sh@15 -- # process_shm --id 0 00:20:45.750 21:16:23 -- common/autotest_common.sh@796 -- # type=--id 00:20:45.750 21:16:23 -- common/autotest_common.sh@797 -- # id=0 00:20:45.750 21:16:23 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:45.750 21:16:23 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:45.750 21:16:23 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:45.750 21:16:23 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:45.750 21:16:23 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:45.750 21:16:23 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:45.750 nvmf_trace.0 00:20:46.010 21:16:23 -- common/autotest_common.sh@811 -- # return 0 00:20:46.010 21:16:23 -- target/tls.sh@16 -- # killprocess 2407917 00:20:46.010 21:16:23 -- common/autotest_common.sh@926 -- # '[' -z 2407917 ']' 00:20:46.010 21:16:23 -- common/autotest_common.sh@930 -- # kill -0 2407917 00:20:46.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2407917) - No such process 00:20:46.010 21:16:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2407917 is not found' 00:20:46.010 Process with pid 2407917 is not found 00:20:46.010 21:16:23 -- target/tls.sh@17 -- # nvmftestfini 00:20:46.010 21:16:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:46.010 21:16:23 -- nvmf/common.sh@116 -- # sync 00:20:46.010 21:16:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:46.010 21:16:23 -- nvmf/common.sh@119 -- # set +e 00:20:46.010 21:16:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:46.010 21:16:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:46.010 rmmod nvme_tcp 00:20:46.010 rmmod nvme_fabrics 00:20:46.010 rmmod nvme_keyring 00:20:46.010 21:16:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:46.010 21:16:23 -- nvmf/common.sh@123 -- # set -e 00:20:46.010 21:16:23 -- nvmf/common.sh@124 -- # return 0 00:20:46.010 21:16:23 -- nvmf/common.sh@477 -- # '[' -n 2407863 ']' 00:20:46.010 21:16:23 -- nvmf/common.sh@478 -- # killprocess 2407863 00:20:46.010 21:16:23 -- common/autotest_common.sh@926 -- # '[' -z 2407863 ']' 00:20:46.010 21:16:23 -- common/autotest_common.sh@930 -- # kill -0 2407863 00:20:46.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2407863) - No such process 00:20:46.010 21:16:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2407863 is not found' 00:20:46.010 Process with pid 2407863 is not found 00:20:46.010 21:16:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:46.010 21:16:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:46.010 21:16:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:46.010 21:16:23 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.010 21:16:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:46.010 21:16:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.010 21:16:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.010 21:16:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.556 21:16:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:48.556 21:16:26 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:48.556 00:20:48.556 real 1m11.765s 00:20:48.556 user 1m43.388s 00:20:48.556 sys 0m28.132s 00:20:48.556 21:16:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.556 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:20:48.556 ************************************ 00:20:48.556 END TEST nvmf_tls 00:20:48.556 ************************************ 00:20:48.556 21:16:26 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.556 21:16:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:48.556 21:16:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:48.556 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:20:48.556 ************************************ 00:20:48.556 START TEST nvmf_fips 00:20:48.556 ************************************ 00:20:48.556 21:16:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.556 * Looking for test storage... 
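The nvmf_tls teardown above archives the trace shared-memory file into the build output and then unwinds the TCP stack; killprocess tolerates targets that have already exited (the "No such process" lines). Condensed into a standalone sketch with the same paths and interface name as this run:

    # Keep the nvmf trace for the build artifacts, then tear the transport stack down.
    tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    modprobe -v -r nvme-tcp        # per the rmmod lines above, this also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics    # second pass as in nvmf/common.sh; nothing left to remove by this point
    ip -4 addr flush cvl_0_1
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt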
00:20:48.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:48.556 21:16:26 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.556 21:16:26 -- nvmf/common.sh@7 -- # uname -s 00:20:48.556 21:16:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.556 21:16:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.556 21:16:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.556 21:16:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.556 21:16:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.556 21:16:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.556 21:16:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.556 21:16:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.556 21:16:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.556 21:16:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.556 21:16:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.556 21:16:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.556 21:16:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.556 21:16:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.556 21:16:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.556 21:16:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.556 21:16:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.556 21:16:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.556 21:16:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.556 21:16:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.556 21:16:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.557 21:16:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.557 21:16:26 -- paths/export.sh@5 -- # export PATH 00:20:48.557 21:16:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.557 21:16:26 -- nvmf/common.sh@46 -- # : 0 00:20:48.557 21:16:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:48.557 21:16:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:48.557 21:16:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:48.557 21:16:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.557 21:16:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.557 21:16:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:48.557 21:16:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:48.557 21:16:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:48.557 21:16:26 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:48.557 21:16:26 -- fips/fips.sh@89 -- # check_openssl_version 00:20:48.557 21:16:26 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:48.557 21:16:26 -- fips/fips.sh@85 -- # openssl version 00:20:48.557 21:16:26 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:48.557 21:16:26 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:48.557 21:16:26 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:48.557 21:16:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:48.557 21:16:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:48.557 21:16:26 -- scripts/common.sh@335 -- # IFS=.-: 00:20:48.557 21:16:26 -- scripts/common.sh@335 -- # read -ra ver1 00:20:48.557 21:16:26 -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.557 21:16:26 -- scripts/common.sh@336 -- # read -ra ver2 00:20:48.557 21:16:26 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:48.557 21:16:26 -- scripts/common.sh@339 -- # ver1_l=3 00:20:48.557 21:16:26 -- scripts/common.sh@340 -- # ver2_l=3 00:20:48.557 21:16:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:48.557 21:16:26 -- scripts/common.sh@343 -- # case "$op" in 00:20:48.557 21:16:26 -- scripts/common.sh@347 -- # : 1 00:20:48.557 21:16:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:48.557 21:16:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.557 21:16:26 -- scripts/common.sh@364 -- # decimal 3 00:20:48.557 21:16:26 -- scripts/common.sh@352 -- # local d=3 00:20:48.557 21:16:26 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.557 21:16:26 -- scripts/common.sh@354 -- # echo 3 00:20:48.557 21:16:26 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:48.557 21:16:26 -- scripts/common.sh@365 -- # decimal 3 00:20:48.557 21:16:26 -- scripts/common.sh@352 -- # local d=3 00:20:48.557 21:16:26 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.557 21:16:26 -- scripts/common.sh@354 -- # echo 3 00:20:48.557 21:16:26 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:48.557 21:16:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:48.557 21:16:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:48.557 21:16:26 -- scripts/common.sh@363 -- # (( v++ )) 00:20:48.557 21:16:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.557 21:16:26 -- scripts/common.sh@364 -- # decimal 0 00:20:48.557 21:16:26 -- scripts/common.sh@352 -- # local d=0 00:20:48.557 21:16:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.557 21:16:26 -- scripts/common.sh@354 -- # echo 0 00:20:48.557 21:16:26 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:48.557 21:16:26 -- scripts/common.sh@365 -- # decimal 0 00:20:48.557 21:16:26 -- scripts/common.sh@352 -- # local d=0 00:20:48.557 21:16:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.557 21:16:26 -- scripts/common.sh@354 -- # echo 0 00:20:48.557 21:16:26 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:48.557 21:16:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:48.557 21:16:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:48.557 21:16:26 -- scripts/common.sh@363 -- # (( v++ )) 00:20:48.557 21:16:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.557 21:16:26 -- scripts/common.sh@364 -- # decimal 9 00:20:48.557 21:16:26 -- scripts/common.sh@352 -- # local d=9 00:20:48.557 21:16:26 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:48.557 21:16:26 -- scripts/common.sh@354 -- # echo 9 00:20:48.557 21:16:26 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:48.557 21:16:26 -- scripts/common.sh@365 -- # decimal 0 00:20:48.557 21:16:26 -- scripts/common.sh@352 -- # local d=0 00:20:48.557 21:16:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.557 21:16:26 -- scripts/common.sh@354 -- # echo 0 00:20:48.557 21:16:26 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:48.557 21:16:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:48.557 21:16:26 -- scripts/common.sh@366 -- # return 0 00:20:48.557 21:16:26 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:48.557 21:16:26 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:48.557 21:16:26 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:48.557 21:16:26 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:48.557 21:16:26 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:48.557 21:16:26 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:48.557 21:16:26 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:48.557 21:16:26 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:48.557 21:16:26 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:48.557 21:16:26 -- fips/fips.sh@114 -- # build_openssl_config 00:20:48.557 21:16:26 -- fips/fips.sh@37 -- # cat 00:20:48.557 21:16:26 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:48.557 21:16:26 -- fips/fips.sh@58 -- # cat - 00:20:48.557 21:16:26 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:48.557 21:16:26 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:48.557 21:16:26 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:48.557 21:16:26 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:48.557 21:16:26 -- fips/fips.sh@117 -- # openssl list -providers 00:20:48.557 21:16:26 -- fips/fips.sh@117 -- # grep name 00:20:48.557 21:16:26 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:48.557 21:16:26 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:48.557 21:16:26 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:48.557 21:16:26 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:48.557 21:16:26 -- common/autotest_common.sh@640 -- # local es=0 00:20:48.557 21:16:26 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:48.557 21:16:26 -- fips/fips.sh@128 -- # : 00:20:48.557 21:16:26 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:48.557 21:16:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:48.557 21:16:26 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:48.557 21:16:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:48.557 21:16:26 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:48.557 21:16:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:48.557 21:16:26 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:48.557 21:16:26 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:48.557 21:16:26 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:48.557 Error setting digest 00:20:48.557 002292F47E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:48.557 002292F47E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:48.557 21:16:26 -- common/autotest_common.sh@643 -- # es=1 00:20:48.557 21:16:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:48.557 21:16:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:48.557 21:16:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
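The fips.sh preamble above is a three-part sanity check that the host OpenSSL really is operating in FIPS mode: the version must be 3.x or newer, the fips.so provider module must exist under the path reported by 'openssl info -modulesdir', and a non-approved digest such as MD5 must be refused (the 'Error setting digest ... unsupported' lines are the expected outcome, not a failure). A minimal standalone re-check of the same conditions could look like this sketch (plain bash; the module path is the RHEL location seen in this run):

  # Sketch: re-check the FIPS conditions that fips.sh verifies above.
  openssl version                                   # needs a 3.x, provider-based build
  [ -f /usr/lib64/ossl-modules/fips.so ] || echo "fips.so provider module missing"
  openssl list -providers | grep -i name            # expect both a base and a fips provider
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 accepted -> FIPS restrictions are NOT active"
  else
      echo "MD5 rejected -> FIPS restrictions are in effect"
  fi

fips.sh then exports OPENSSL_CONF=spdk_fips.conf (assembled by build_openssl_config), so the target and bdevperf processes started later in the test inherit the FIPS-forced configuration.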
00:20:48.557 21:16:26 -- fips/fips.sh@131 -- # nvmftestinit 00:20:48.557 21:16:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:48.557 21:16:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.557 21:16:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:48.558 21:16:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:48.558 21:16:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:48.558 21:16:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.558 21:16:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.558 21:16:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.558 21:16:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:48.558 21:16:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:48.558 21:16:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:48.558 21:16:26 -- common/autotest_common.sh@10 -- # set +x 00:20:55.153 21:16:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:55.153 21:16:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:55.153 21:16:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:55.153 21:16:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:55.153 21:16:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:55.153 21:16:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:55.153 21:16:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:55.153 21:16:33 -- nvmf/common.sh@294 -- # net_devs=() 00:20:55.153 21:16:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:55.153 21:16:33 -- nvmf/common.sh@295 -- # e810=() 00:20:55.153 21:16:33 -- nvmf/common.sh@295 -- # local -ga e810 00:20:55.153 21:16:33 -- nvmf/common.sh@296 -- # x722=() 00:20:55.153 21:16:33 -- nvmf/common.sh@296 -- # local -ga x722 00:20:55.153 21:16:33 -- nvmf/common.sh@297 -- # mlx=() 00:20:55.153 21:16:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:55.153 21:16:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.153 21:16:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.154 21:16:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.154 21:16:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:55.154 21:16:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:55.154 21:16:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:55.154 21:16:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:55.154 21:16:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:55.154 Found 0000:4b:00.0 
(0x8086 - 0x159b) 00:20:55.154 21:16:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:55.154 21:16:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:55.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:55.154 21:16:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:55.154 21:16:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:55.154 21:16:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.154 21:16:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:55.154 21:16:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.154 21:16:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:55.154 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:55.154 21:16:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.154 21:16:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:55.154 21:16:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.154 21:16:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:55.154 21:16:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.154 21:16:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:55.154 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:55.154 21:16:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.154 21:16:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:55.154 21:16:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:55.154 21:16:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:55.154 21:16:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:55.154 21:16:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.154 21:16:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.154 21:16:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.154 21:16:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:55.154 21:16:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.154 21:16:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.154 21:16:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:55.154 21:16:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.154 21:16:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.154 21:16:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:55.154 21:16:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:55.154 21:16:33 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:20:55.154 21:16:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.417 21:16:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.417 21:16:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.417 21:16:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:55.417 21:16:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.417 21:16:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.417 21:16:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.417 21:16:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:55.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:20:55.417 00:20:55.417 --- 10.0.0.2 ping statistics --- 00:20:55.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.417 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:20:55.417 21:16:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.454 ms 00:20:55.417 00:20:55.417 --- 10.0.0.1 ping statistics --- 00:20:55.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.417 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:20:55.417 21:16:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.417 21:16:33 -- nvmf/common.sh@410 -- # return 0 00:20:55.417 21:16:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:55.417 21:16:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.417 21:16:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:55.417 21:16:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:55.417 21:16:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.417 21:16:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:55.417 21:16:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:55.679 21:16:33 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:55.679 21:16:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:55.679 21:16:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:55.679 21:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:55.679 21:16:33 -- nvmf/common.sh@469 -- # nvmfpid=2414351 00:20:55.679 21:16:33 -- nvmf/common.sh@470 -- # waitforlisten 2414351 00:20:55.679 21:16:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.679 21:16:33 -- common/autotest_common.sh@819 -- # '[' -z 2414351 ']' 00:20:55.679 21:16:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.679 21:16:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.679 21:16:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.679 21:16:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.679 21:16:33 -- common/autotest_common.sh@10 -- # set +x 00:20:55.679 [2024-06-08 21:16:33.617464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
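nvmftestinit/nvmf_tcp_init above turns the two detected e810 ports into a self-contained loopback topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule admits TCP port 4420 and a ping in each direction confirms connectivity before the target is launched inside the namespace. Condensed from the trace, the sequence is roughly (sketch; interface names are the ones detected in this run, root required):

  # Sketch of the nvmf_tcp_init steps traced above.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (root namespace)
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1                     # target -> initiator

Running nvmf_tgt through 'ip netns exec cvl_0_0_ns_spdk' is what produces the NVMF_TARGET_NS_CMD prefix visible in the app start line just above.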
00:20:55.679 [2024-06-08 21:16:33.617538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.679 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.679 [2024-06-08 21:16:33.705288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.940 [2024-06-08 21:16:33.795694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:55.940 [2024-06-08 21:16:33.795857] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.940 [2024-06-08 21:16:33.795866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.940 [2024-06-08 21:16:33.795881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.940 [2024-06-08 21:16:33.795913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.513 21:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.513 21:16:34 -- common/autotest_common.sh@852 -- # return 0 00:20:56.513 21:16:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:56.513 21:16:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.513 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:20:56.513 21:16:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.513 21:16:34 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:56.513 21:16:34 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:56.513 21:16:34 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.513 21:16:34 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:56.513 21:16:34 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.513 21:16:34 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.513 21:16:34 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:56.513 21:16:34 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.513 [2024-06-08 21:16:34.567707] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.513 [2024-06-08 21:16:34.583705] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.513 [2024-06-08 21:16:34.583990] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.774 malloc0 00:20:56.774 21:16:34 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.774 21:16:34 -- fips/fips.sh@148 -- # bdevperf_pid=2414666 00:20:56.774 21:16:34 -- fips/fips.sh@149 -- # waitforlisten 2414666 /var/tmp/bdevperf.sock 00:20:56.774 21:16:34 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.774 21:16:34 -- common/autotest_common.sh@819 -- # '[' -z 2414666 ']' 00:20:56.774 21:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.774 21:16:34 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:56.774 21:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.774 21:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:56.774 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:20:56.774 [2024-06-08 21:16:34.709957] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:56.774 [2024-06-08 21:16:34.710031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414666 ] 00:20:56.774 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.774 [2024-06-08 21:16:34.766700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.774 [2024-06-08 21:16:34.828079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.717 21:16:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:57.718 21:16:35 -- common/autotest_common.sh@852 -- # return 0 00:20:57.718 21:16:35 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:57.718 [2024-06-08 21:16:35.599453] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.718 TLSTESTn1 00:20:57.718 21:16:35 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.718 Running I/O for 10 seconds... 
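The TLS portion of fips.sh traced above hinges on a pre-shared key: the PSK is written to key.txt with mode 0600, the target is told to listen with TLS on 10.0.0.2:4420 (hence the 'TLS support is considered experimental' notice), and bdevperf attaches as the initiator with --psk pointing at the same file, which yields the TLSTESTn1 bdev that the 10-second verify workload below runs against. Reduced to its essentials, the initiator side is (sketch; $SPDK stands in for the full /var/jenkins/workspace/.../spdk path, and the real script waits for each RPC socket before using it):

  # Sketch of the TLS-PSK bdevperf attach performed above.
  KEY=$SPDK/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
  chmod 0600 "$KEY"

  # bdevperf idles (-z) on its own RPC socket until it is handed a controller.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests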
00:21:09.952 00:21:09.952 Latency(us) 00:21:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.952 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.952 Verification LBA range: start 0x0 length 0x2000 00:21:09.952 TLSTESTn1 : 10.06 1732.60 6.77 0.00 0.00 73715.57 10267.31 85633.71 00:21:09.952 =================================================================================================================== 00:21:09.952 Total : 1732.60 6.77 0.00 0.00 73715.57 10267.31 85633.71 00:21:09.952 0 00:21:09.952 21:16:45 -- fips/fips.sh@1 -- # cleanup 00:21:09.952 21:16:45 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:09.952 21:16:45 -- common/autotest_common.sh@796 -- # type=--id 00:21:09.952 21:16:45 -- common/autotest_common.sh@797 -- # id=0 00:21:09.952 21:16:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:09.952 21:16:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:09.952 21:16:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:09.952 21:16:45 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:09.952 21:16:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:09.952 21:16:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:09.952 nvmf_trace.0 00:21:09.952 21:16:45 -- common/autotest_common.sh@811 -- # return 0 00:21:09.952 21:16:45 -- fips/fips.sh@16 -- # killprocess 2414666 00:21:09.952 21:16:45 -- common/autotest_common.sh@926 -- # '[' -z 2414666 ']' 00:21:09.952 21:16:45 -- common/autotest_common.sh@930 -- # kill -0 2414666 00:21:09.952 21:16:45 -- common/autotest_common.sh@931 -- # uname 00:21:09.952 21:16:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:09.952 21:16:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2414666 00:21:09.952 21:16:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:09.952 21:16:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:09.952 21:16:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2414666' 00:21:09.952 killing process with pid 2414666 00:21:09.952 21:16:46 -- common/autotest_common.sh@945 -- # kill 2414666 00:21:09.952 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.952 00:21:09.952 Latency(us) 00:21:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.952 =================================================================================================================== 00:21:09.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.952 21:16:46 -- common/autotest_common.sh@950 -- # wait 2414666 00:21:09.952 21:16:46 -- fips/fips.sh@17 -- # nvmftestfini 00:21:09.952 21:16:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:09.952 21:16:46 -- nvmf/common.sh@116 -- # sync 00:21:09.952 21:16:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:09.952 21:16:46 -- nvmf/common.sh@119 -- # set +e 00:21:09.952 21:16:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:09.952 21:16:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:09.952 rmmod nvme_tcp 00:21:09.952 rmmod nvme_fabrics 00:21:09.952 rmmod nvme_keyring 00:21:09.952 21:16:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:09.952 21:16:46 -- nvmf/common.sh@123 -- # set -e 00:21:09.952 21:16:46 -- nvmf/common.sh@124 -- # return 0 
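The cleanup traced around here mirrors the setup: the nvmf_trace.0 shared-memory file is archived with tar, the bdevperf process is killed by pid, the nvme-tcp and nvme-fabrics modules are unloaded, and the lines that follow kill the target, drop the spdk namespace, flush the initiator address and delete the PSK file. As a rough sketch of what that amounts to outside the test framework (the pid variables and the kill -9 shortcut are simplifications; killprocess in autotest_common.sh attempts a graceful kill first):

  # Rough sketch of the teardown traced here (run as root).
  tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
  kill -9 "$bdevperf_pid" "$nvmf_tgt_pid" 2>/dev/null || true
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # the namespace remove_spdk_ns cleans up
  ip -4 addr flush cvl_0_1
  rm -f "$SPDK/test/nvmf/fips/key.txt"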
00:21:09.952 21:16:46 -- nvmf/common.sh@477 -- # '[' -n 2414351 ']' 00:21:09.952 21:16:46 -- nvmf/common.sh@478 -- # killprocess 2414351 00:21:09.952 21:16:46 -- common/autotest_common.sh@926 -- # '[' -z 2414351 ']' 00:21:09.952 21:16:46 -- common/autotest_common.sh@930 -- # kill -0 2414351 00:21:09.952 21:16:46 -- common/autotest_common.sh@931 -- # uname 00:21:09.952 21:16:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:09.952 21:16:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2414351 00:21:09.952 21:16:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:09.953 21:16:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:09.953 21:16:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2414351' 00:21:09.953 killing process with pid 2414351 00:21:09.953 21:16:46 -- common/autotest_common.sh@945 -- # kill 2414351 00:21:09.953 21:16:46 -- common/autotest_common.sh@950 -- # wait 2414351 00:21:09.953 21:16:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:09.953 21:16:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:09.953 21:16:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:09.953 21:16:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.953 21:16:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:09.953 21:16:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.953 21:16:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.953 21:16:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.568 21:16:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:10.568 21:16:48 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:10.568 00:21:10.568 real 0m22.366s 00:21:10.568 user 0m22.051s 00:21:10.568 sys 0m10.841s 00:21:10.568 21:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.568 21:16:48 -- common/autotest_common.sh@10 -- # set +x 00:21:10.568 ************************************ 00:21:10.568 END TEST nvmf_fips 00:21:10.568 ************************************ 00:21:10.568 21:16:48 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:10.568 21:16:48 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:10.568 21:16:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:10.568 21:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.568 21:16:48 -- common/autotest_common.sh@10 -- # set +x 00:21:10.568 ************************************ 00:21:10.568 START TEST nvmf_fuzz 00:21:10.568 ************************************ 00:21:10.568 21:16:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:10.568 * Looking for test storage... 
00:21:10.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.568 21:16:48 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.568 21:16:48 -- nvmf/common.sh@7 -- # uname -s 00:21:10.568 21:16:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.568 21:16:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.568 21:16:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.568 21:16:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.568 21:16:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.568 21:16:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.568 21:16:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.568 21:16:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.568 21:16:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.568 21:16:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.568 21:16:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.568 21:16:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.568 21:16:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.568 21:16:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.568 21:16:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.568 21:16:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.569 21:16:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.569 21:16:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.569 21:16:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.569 21:16:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.569 21:16:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.569 21:16:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.569 21:16:48 -- paths/export.sh@5 -- # export PATH 00:21:10.569 21:16:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.569 21:16:48 -- nvmf/common.sh@46 -- # : 0 00:21:10.569 21:16:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:10.569 21:16:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:10.569 21:16:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:10.569 21:16:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.569 21:16:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.569 21:16:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:10.569 21:16:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:10.569 21:16:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:10.569 21:16:48 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:10.569 21:16:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:10.569 21:16:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.569 21:16:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:10.569 21:16:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:10.569 21:16:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:10.569 21:16:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.569 21:16:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.569 21:16:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.569 21:16:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:10.569 21:16:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:10.569 21:16:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:10.569 21:16:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.710 21:16:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:18.710 21:16:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:18.710 21:16:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:18.710 21:16:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:18.710 21:16:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:18.710 21:16:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:18.710 21:16:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:18.710 21:16:55 -- nvmf/common.sh@294 -- # net_devs=() 00:21:18.710 21:16:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:18.710 21:16:55 -- nvmf/common.sh@295 -- # e810=() 00:21:18.710 21:16:55 -- nvmf/common.sh@295 -- # local -ga e810 00:21:18.710 21:16:55 -- nvmf/common.sh@296 -- # x722=() 
00:21:18.710 21:16:55 -- nvmf/common.sh@296 -- # local -ga x722 00:21:18.710 21:16:55 -- nvmf/common.sh@297 -- # mlx=() 00:21:18.710 21:16:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:18.710 21:16:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.710 21:16:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:18.710 21:16:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:18.710 21:16:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:18.710 21:16:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:18.710 21:16:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:18.710 21:16:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:18.710 21:16:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.710 21:16:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:18.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:18.710 21:16:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.710 21:16:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.710 21:16:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.710 21:16:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.711 21:16:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:18.711 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:18.711 21:16:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:18.711 21:16:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.711 21:16:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.711 21:16:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.711 21:16:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.711 21:16:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:18.711 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:18.711 21:16:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:18.711 21:16:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.711 21:16:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.711 21:16:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.711 21:16:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.711 21:16:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:18.711 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:18.711 21:16:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.711 21:16:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:18.711 21:16:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:18.711 21:16:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:18.711 21:16:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.711 21:16:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.711 21:16:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.711 21:16:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:18.711 21:16:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.711 21:16:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.711 21:16:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:18.711 21:16:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.711 21:16:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.711 21:16:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:18.711 21:16:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:18.711 21:16:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.711 21:16:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.711 21:16:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.711 21:16:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.711 21:16:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:18.711 21:16:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.711 21:16:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.711 21:16:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.711 21:16:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:18.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:21:18.711 00:21:18.711 --- 10.0.0.2 ping statistics --- 00:21:18.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.711 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:21:18.711 21:16:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:21:18.711 00:21:18.711 --- 10.0.0.1 ping statistics --- 00:21:18.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.711 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:21:18.711 21:16:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.711 21:16:55 -- nvmf/common.sh@410 -- # return 0 00:21:18.711 21:16:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:18.711 21:16:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.711 21:16:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:18.711 21:16:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.711 21:16:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:18.711 21:16:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:18.711 21:16:55 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2421040 00:21:18.711 21:16:55 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:18.711 21:16:55 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:18.711 21:16:55 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2421040 00:21:18.711 21:16:55 -- common/autotest_common.sh@819 -- # '[' -z 2421040 ']' 00:21:18.711 21:16:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.711 21:16:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:18.711 21:16:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
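With the namespaced target up and waitforlisten satisfied, fabrics_fuzz.sh (whose RPCs follow in the trace) gives the fuzzer something concrete to aim at: a TCP transport with 8192-byte in-capsule data, a 64 MiB / 512-byte-block Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420; nvme_fuzz is then run once for 30 seconds of seeded random commands and once replaying example.json. Boiled down (sketch; rpc.py and nvme_fuzz shown without the full workspace paths):

  # Sketch of the fuzz-target configuration that follows in the trace.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512              # 64 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # 30 s of random admin/io commands with a fixed seed, then a replay of the JSON corpus
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" -j test/app/fuzz/nvme_fuzz/example.json -a

The two 'Fuzzing completed' summaries further down belong to these two invocations: the seeded random pass completes roughly a million I/O commands, while the example.json replay only issues the handful of admin commands encoded in the corpus.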
00:21:18.711 21:16:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:18.711 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:18.711 21:16:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.711 21:16:56 -- common/autotest_common.sh@852 -- # return 0 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.711 21:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.711 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.711 21:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:18.711 21:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.711 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.711 Malloc0 00:21:18.711 21:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.711 21:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.711 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.711 21:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:18.711 21:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.711 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.711 21:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.711 21:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.711 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:18.711 21:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:18.711 21:16:56 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:50.826 Fuzzing completed. Shutting down the fuzz application 00:21:50.826 00:21:50.826 Dumping successful admin opcodes: 00:21:50.826 8, 9, 10, 24, 00:21:50.826 Dumping successful io opcodes: 00:21:50.826 0, 9, 00:21:50.826 NS: 0x200003aeff00 I/O qp, Total commands completed: 972128, total successful commands: 5688, random_seed: 1936947776 00:21:50.826 NS: 0x200003aeff00 admin qp, Total commands completed: 123112, total successful commands: 1009, random_seed: 527714944 00:21:50.826 21:17:27 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:50.826 Fuzzing completed. 
Shutting down the fuzz application 00:21:50.826 00:21:50.826 Dumping successful admin opcodes: 00:21:50.826 24, 00:21:50.826 Dumping successful io opcodes: 00:21:50.826 00:21:50.826 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 808712781 00:21:50.826 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 808797631 00:21:50.826 21:17:28 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.826 21:17:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.826 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:21:50.826 21:17:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.826 21:17:28 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:50.826 21:17:28 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:50.826 21:17:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:50.826 21:17:28 -- nvmf/common.sh@116 -- # sync 00:21:50.826 21:17:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:50.826 21:17:28 -- nvmf/common.sh@119 -- # set +e 00:21:50.826 21:17:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:50.826 21:17:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:50.826 rmmod nvme_tcp 00:21:50.826 rmmod nvme_fabrics 00:21:50.826 rmmod nvme_keyring 00:21:50.826 21:17:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:50.826 21:17:28 -- nvmf/common.sh@123 -- # set -e 00:21:50.826 21:17:28 -- nvmf/common.sh@124 -- # return 0 00:21:50.826 21:17:28 -- nvmf/common.sh@477 -- # '[' -n 2421040 ']' 00:21:50.826 21:17:28 -- nvmf/common.sh@478 -- # killprocess 2421040 00:21:50.826 21:17:28 -- common/autotest_common.sh@926 -- # '[' -z 2421040 ']' 00:21:50.826 21:17:28 -- common/autotest_common.sh@930 -- # kill -0 2421040 00:21:50.826 21:17:28 -- common/autotest_common.sh@931 -- # uname 00:21:50.826 21:17:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:50.826 21:17:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2421040 00:21:50.826 21:17:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:50.826 21:17:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:50.826 21:17:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2421040' 00:21:50.826 killing process with pid 2421040 00:21:50.826 21:17:28 -- common/autotest_common.sh@945 -- # kill 2421040 00:21:50.826 21:17:28 -- common/autotest_common.sh@950 -- # wait 2421040 00:21:50.826 21:17:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:50.826 21:17:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:50.826 21:17:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:50.826 21:17:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.826 21:17:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:50.826 21:17:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.826 21:17:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.826 21:17:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.741 21:17:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:52.741 21:17:30 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:52.741 00:21:52.741 real 0m42.145s 00:21:52.741 user 0m55.421s 00:21:52.741 sys 
0m16.109s 00:21:52.741 21:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.741 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:52.741 ************************************ 00:21:52.741 END TEST nvmf_fuzz 00:21:52.741 ************************************ 00:21:52.741 21:17:30 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:52.741 21:17:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:52.741 21:17:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:52.741 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:21:52.741 ************************************ 00:21:52.741 START TEST nvmf_multiconnection 00:21:52.741 ************************************ 00:21:52.741 21:17:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:52.741 * Looking for test storage... 00:21:52.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.741 21:17:30 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.741 21:17:30 -- nvmf/common.sh@7 -- # uname -s 00:21:52.741 21:17:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.741 21:17:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.741 21:17:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.741 21:17:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.741 21:17:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.741 21:17:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.741 21:17:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.741 21:17:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.741 21:17:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.741 21:17:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.741 21:17:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.741 21:17:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.741 21:17:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.741 21:17:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.741 21:17:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.741 21:17:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.742 21:17:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.742 21:17:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.742 21:17:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.742 21:17:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.742 21:17:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.742 21:17:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.742 21:17:30 -- paths/export.sh@5 -- # export PATH 00:21:52.742 21:17:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.742 21:17:30 -- nvmf/common.sh@46 -- # : 0 00:21:52.742 21:17:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:52.742 21:17:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:52.742 21:17:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:52.742 21:17:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.742 21:17:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.742 21:17:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:52.742 21:17:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:52.742 21:17:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:52.742 21:17:30 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.742 21:17:30 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.742 21:17:30 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:52.742 21:17:30 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:52.742 21:17:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:52.742 21:17:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.742 21:17:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:52.742 21:17:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:52.742 21:17:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:52.742 21:17:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.742 21:17:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.742 21:17:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.742 21:17:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:52.742 21:17:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:52.742 21:17:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:52.742 21:17:30 -- common/autotest_common.sh@10 -- 
# set +x 00:22:00.886 21:17:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:00.886 21:17:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:00.886 21:17:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:00.886 21:17:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:00.886 21:17:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:00.886 21:17:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:00.886 21:17:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:00.886 21:17:37 -- nvmf/common.sh@294 -- # net_devs=() 00:22:00.886 21:17:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:00.886 21:17:37 -- nvmf/common.sh@295 -- # e810=() 00:22:00.886 21:17:37 -- nvmf/common.sh@295 -- # local -ga e810 00:22:00.886 21:17:37 -- nvmf/common.sh@296 -- # x722=() 00:22:00.886 21:17:37 -- nvmf/common.sh@296 -- # local -ga x722 00:22:00.886 21:17:37 -- nvmf/common.sh@297 -- # mlx=() 00:22:00.886 21:17:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:00.886 21:17:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.886 21:17:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:00.886 21:17:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:00.886 21:17:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:00.886 21:17:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:00.886 21:17:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:00.886 21:17:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:00.886 21:17:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:00.886 21:17:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:00.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:00.886 21:17:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:00.886 21:17:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:00.886 21:17:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:00.887 21:17:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:00.887 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:00.887 21:17:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.887 21:17:37 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:00.887 21:17:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:00.887 21:17:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.887 21:17:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:00.887 21:17:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.887 21:17:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:00.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:00.887 21:17:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.887 21:17:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:00.887 21:17:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.887 21:17:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:00.887 21:17:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.887 21:17:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:00.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:00.887 21:17:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.887 21:17:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:00.887 21:17:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:00.887 21:17:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:00.887 21:17:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.887 21:17:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.887 21:17:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.887 21:17:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:00.887 21:17:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.887 21:17:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.887 21:17:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:00.887 21:17:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.887 21:17:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.887 21:17:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:00.887 21:17:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:00.887 21:17:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.887 21:17:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.887 21:17:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.887 21:17:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.887 21:17:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:00.887 21:17:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.887 21:17:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.887 21:17:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.887 21:17:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:00.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:00.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:22:00.887 00:22:00.887 --- 10.0.0.2 ping statistics --- 00:22:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.887 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:22:00.887 21:17:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:22:00.887 00:22:00.887 --- 10.0.0.1 ping statistics --- 00:22:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.887 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:22:00.887 21:17:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.887 21:17:37 -- nvmf/common.sh@410 -- # return 0 00:22:00.887 21:17:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:00.887 21:17:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.887 21:17:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:00.887 21:17:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.887 21:17:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:00.887 21:17:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:00.887 21:17:37 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:00.887 21:17:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.887 21:17:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:00.887 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 21:17:37 -- nvmf/common.sh@469 -- # nvmfpid=2431467 00:22:00.887 21:17:37 -- nvmf/common.sh@470 -- # waitforlisten 2431467 00:22:00.887 21:17:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:00.887 21:17:37 -- common/autotest_common.sh@819 -- # '[' -z 2431467 ']' 00:22:00.887 21:17:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.887 21:17:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.887 21:17:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.887 21:17:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.887 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 [2024-06-08 21:17:37.871861] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:00.887 [2024-06-08 21:17:37.871925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.887 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.887 [2024-06-08 21:17:37.942639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.887 [2024-06-08 21:17:38.017687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.887 [2024-06-08 21:17:38.017823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
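The nvmf_tcp_init block above is what lets this single host play both roles over a real pair of e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened in iptables, and both directions are verified with ping. A minimal sketch of that plumbing, using only the interface names, addresses and port that appear in the log (the cvl_0_* netdev names are specific to this rig's ice NICs, not fixed values; run as root):

# assumption: cvl_0_0 / cvl_0_1 are the two ports of the same NIC on this rig
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (TCP/4420)
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The sub-millisecond round-trip times in the ping output above are the sanity check that the two ports really can reach each other before nvmf_tgt is started inside the namespace.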
00:22:00.887 [2024-06-08 21:17:38.017833] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.887 [2024-06-08 21:17:38.017841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.887 [2024-06-08 21:17:38.017979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.887 [2024-06-08 21:17:38.018094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.887 [2024-06-08 21:17:38.018251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.887 [2024-06-08 21:17:38.018251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.887 21:17:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.887 21:17:38 -- common/autotest_common.sh@852 -- # return 0 00:22:00.887 21:17:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.887 21:17:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:00.887 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 21:17:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.887 21:17:38 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.887 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.887 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 [2024-06-08 21:17:38.688577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.887 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.887 21:17:38 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:00.887 21:17:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.887 21:17:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:00.887 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.887 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 Malloc1 00:22:00.887 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.887 21:17:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:00.887 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.887 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.887 21:17:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:00.887 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.887 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.887 21:17:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.887 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.887 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 [2024-06-08 21:17:38.751890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.888 21:17:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:00.888 21:17:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 Malloc2 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.888 21:17:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 Malloc3 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.888 21:17:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 Malloc4 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.888 21:17:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 Malloc5 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.888 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.888 21:17:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.888 21:17:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:00.888 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.888 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 Malloc6 00:22:01.149 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:01.149 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 21:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:01.149 21:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:39 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:01.149 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 21:17:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:39 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.149 21:17:39 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:01.149 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 Malloc7 00:22:01.149 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:39 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:01.149 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:39 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:01.149 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:39 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:01.149 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.149 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.149 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.149 21:17:39 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.149 21:17:39 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:01.149 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 Malloc8 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.150 21:17:39 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 Malloc9 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
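The repetitive rpc_cmd sequence running through this part of the log is the multiconnection test building eleven identical targets: for each i it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420 (the TCP transport itself was created once, earlier, with nvmf_create_transport -t tcp -o -u 8192). rpc_cmd is the harness's wrapper around scripts/rpc.py, so a stand-alone sketch of the same setup would be roughly the following; the -s /var/tmp/spdk.sock socket matches the waitforlisten line above, and folding the unrolled calls into a loop is the only liberty taken:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# transport created once, with the same flags the log shows
$rpc -s $sock nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 11); do
    $rpc -s $sock bdev_malloc_create 64 512 -b Malloc$i                   # 64 MB malloc bdev, 512 B blocks
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done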
00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.150 21:17:39 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 Malloc10 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.150 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.150 21:17:39 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.150 21:17:39 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:01.150 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.150 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.411 Malloc11 00:22:01.411 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.411 21:17:39 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:01.411 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.411 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.411 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.411 21:17:39 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:01.411 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.411 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.411 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.411 21:17:39 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:01.411 21:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.411 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:22:01.411 21:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.411 21:17:39 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:01.411 21:17:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.411 21:17:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:02.800 21:17:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:02.800 21:17:40 -- common/autotest_common.sh@1177 -- # local i=0 00:22:02.800 21:17:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.800 21:17:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:02.800 21:17:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:05.345 21:17:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:05.345 21:17:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:05.345 21:17:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:05.345 21:17:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:05.345 21:17:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:05.345 21:17:42 -- common/autotest_common.sh@1187 -- # return 0 00:22:05.345 21:17:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:05.345 21:17:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:06.729 21:17:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:06.729 21:17:44 -- common/autotest_common.sh@1177 -- # local i=0 00:22:06.729 21:17:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:06.729 21:17:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:06.729 21:17:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:08.643 21:17:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:08.643 21:17:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:08.643 21:17:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:08.643 21:17:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:08.643 21:17:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.643 21:17:46 -- common/autotest_common.sh@1187 -- # return 0 00:22:08.643 21:17:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.643 21:17:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:10.027 21:17:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:10.027 21:17:48 -- common/autotest_common.sh@1177 -- # local i=0 00:22:10.027 21:17:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.027 21:17:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:10.027 21:17:48 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:12.573 21:17:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:12.573 21:17:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:12.573 21:17:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:12.573 21:17:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:12.573 21:17:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.573 21:17:50 -- common/autotest_common.sh@1187 -- # return 0 00:22:12.573 21:17:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.573 21:17:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:13.957 21:17:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:13.957 21:17:51 -- common/autotest_common.sh@1177 -- # local i=0 00:22:13.957 21:17:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.957 21:17:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:13.957 21:17:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:15.869 21:17:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:15.869 21:17:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:15.869 21:17:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:15.869 21:17:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:15.869 21:17:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.869 21:17:53 -- common/autotest_common.sh@1187 -- # return 0 00:22:15.869 21:17:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.869 21:17:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:17.781 21:17:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:17.781 21:17:55 -- common/autotest_common.sh@1177 -- # local i=0 00:22:17.781 21:17:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:17.781 21:17:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:17.781 21:17:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:19.695 21:17:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:19.695 21:17:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:19.695 21:17:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:19.695 21:17:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:19.695 21:17:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:19.695 21:17:57 -- common/autotest_common.sh@1187 -- # return 0 00:22:19.695 21:17:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.695 21:17:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:21.080 21:17:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:21.080 21:17:59 -- common/autotest_common.sh@1177 -- # local i=0 00:22:21.080 21:17:59 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:21.080 21:17:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:21.080 21:17:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:23.627 21:18:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:23.627 21:18:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:23.627 21:18:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:23.627 21:18:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:23.627 21:18:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.627 21:18:01 -- common/autotest_common.sh@1187 -- # return 0 00:22:23.627 21:18:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.627 21:18:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:25.052 21:18:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:25.053 21:18:02 -- common/autotest_common.sh@1177 -- # local i=0 00:22:25.053 21:18:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.053 21:18:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:25.053 21:18:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:26.965 21:18:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:26.965 21:18:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:26.965 21:18:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:26.965 21:18:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:26.965 21:18:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.965 21:18:04 -- common/autotest_common.sh@1187 -- # return 0 00:22:26.965 21:18:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.965 21:18:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:28.879 21:18:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:28.879 21:18:06 -- common/autotest_common.sh@1177 -- # local i=0 00:22:28.879 21:18:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.879 21:18:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:28.879 21:18:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:30.794 21:18:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:30.794 21:18:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:30.794 21:18:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:30.794 21:18:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:30.794 21:18:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.794 21:18:08 -- common/autotest_common.sh@1187 -- # return 0 00:22:30.794 21:18:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.794 21:18:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:32.707 21:18:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:32.707 
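Each nvme connect / waitforserial pair in this stretch attaches one of the eleven subsystems from the initiator side and then polls lsblk until a block device whose serial matches SPDK$i shows up; that is the sleep 2 / lsblk -l -o NAME,SERIAL / grep -c pattern repeating in the log, retried up to 15 times. Collapsed into a loop, the initiator side of the test is essentially the sketch below (hostnqn and hostid copied from the log; the wait is simplified, since the real helper counts devices and gives up after 15 tries):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

for i in $(seq 1 11); do
    nvme connect --hostnqn=$hostnqn --hostid=$hostid \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # wait until a namespace with serial SPDK$i is visible (-w so SPDK1 does not match SPDK10/11)
    until lsblk -l -o NAME,SERIAL | grep -qw "SPDK$i"; do
        sleep 2
    done
done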
21:18:10 -- common/autotest_common.sh@1177 -- # local i=0 00:22:32.707 21:18:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:32.707 21:18:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:32.707 21:18:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:34.618 21:18:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:34.618 21:18:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:34.618 21:18:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:34.618 21:18:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:34.618 21:18:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:34.618 21:18:12 -- common/autotest_common.sh@1187 -- # return 0 00:22:34.618 21:18:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.618 21:18:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:36.529 21:18:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:36.529 21:18:14 -- common/autotest_common.sh@1177 -- # local i=0 00:22:36.529 21:18:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:36.529 21:18:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:36.529 21:18:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:38.441 21:18:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:38.441 21:18:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:38.441 21:18:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:38.441 21:18:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:38.441 21:18:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:38.441 21:18:16 -- common/autotest_common.sh@1187 -- # return 0 00:22:38.441 21:18:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.441 21:18:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:40.352 21:18:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:40.352 21:18:18 -- common/autotest_common.sh@1177 -- # local i=0 00:22:40.352 21:18:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:40.352 21:18:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:40.352 21:18:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:42.268 21:18:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:42.268 21:18:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:42.268 21:18:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:42.268 21:18:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:42.268 21:18:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:42.268 21:18:20 -- common/autotest_common.sh@1187 -- # return 0 00:22:42.268 21:18:20 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:42.268 [global] 00:22:42.268 thread=1 00:22:42.268 invalidate=1 00:22:42.268 rw=read 00:22:42.268 time_based=1 00:22:42.268 
runtime=10 00:22:42.268 ioengine=libaio 00:22:42.268 direct=1 00:22:42.268 bs=262144 00:22:42.268 iodepth=64 00:22:42.268 norandommap=1 00:22:42.268 numjobs=1 00:22:42.268 00:22:42.268 [job0] 00:22:42.268 filename=/dev/nvme0n1 00:22:42.268 [job1] 00:22:42.268 filename=/dev/nvme10n1 00:22:42.268 [job2] 00:22:42.268 filename=/dev/nvme1n1 00:22:42.268 [job3] 00:22:42.268 filename=/dev/nvme2n1 00:22:42.268 [job4] 00:22:42.268 filename=/dev/nvme3n1 00:22:42.268 [job5] 00:22:42.268 filename=/dev/nvme4n1 00:22:42.268 [job6] 00:22:42.268 filename=/dev/nvme5n1 00:22:42.268 [job7] 00:22:42.268 filename=/dev/nvme6n1 00:22:42.268 [job8] 00:22:42.268 filename=/dev/nvme7n1 00:22:42.268 [job9] 00:22:42.268 filename=/dev/nvme8n1 00:22:42.268 [job10] 00:22:42.268 filename=/dev/nvme9n1 00:22:42.554 Could not set queue depth (nvme0n1) 00:22:42.554 Could not set queue depth (nvme10n1) 00:22:42.554 Could not set queue depth (nvme1n1) 00:22:42.554 Could not set queue depth (nvme2n1) 00:22:42.554 Could not set queue depth (nvme3n1) 00:22:42.554 Could not set queue depth (nvme4n1) 00:22:42.554 Could not set queue depth (nvme5n1) 00:22:42.554 Could not set queue depth (nvme6n1) 00:22:42.554 Could not set queue depth (nvme7n1) 00:22:42.554 Could not set queue depth (nvme8n1) 00:22:42.554 Could not set queue depth (nvme9n1) 00:22:42.820 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.820 fio-3.35 00:22:42.820 Starting 11 threads 00:22:55.100 00:22:55.100 job0: (groupid=0, jobs=1): err= 0: pid=2440878: Sat Jun 8 21:18:31 2024 00:22:55.100 read: IOPS=1331, BW=333MiB/s (349MB/s)(3356MiB/10081msec) 00:22:55.100 slat (usec): min=6, max=85209, avg=709.79, stdev=2224.41 00:22:55.100 clat (msec): min=4, max=169, avg=47.28, stdev=22.30 00:22:55.100 lat (msec): min=4, max=196, avg=47.99, stdev=22.58 00:22:55.100 clat percentiles (msec): 00:22:55.100 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:22:55.100 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 42], 00:22:55.100 | 70.00th=[ 49], 80.00th=[ 56], 90.00th=[ 84], 95.00th=[ 100], 00:22:55.100 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 163], 00:22:55.100 | 99.99th=[ 169] 00:22:55.100 bw ( KiB/s): min=174080, max=489984, per=14.56%, avg=342034.15, 
stdev=101539.62, samples=20 00:22:55.100 iops : min= 680, max= 1914, avg=1336.05, stdev=396.64, samples=20 00:22:55.100 lat (msec) : 10=0.19%, 20=1.02%, 50=71.73%, 100=22.40%, 250=4.66% 00:22:55.101 cpu : usr=0.41%, sys=4.12%, ctx=2806, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=13425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job1: (groupid=0, jobs=1): err= 0: pid=2440879: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=1047, BW=262MiB/s (275MB/s)(2622MiB/10013msec) 00:22:55.101 slat (usec): min=6, max=79033, avg=855.74, stdev=2816.25 00:22:55.101 clat (usec): min=1588, max=177040, avg=60195.91, stdev=33333.10 00:22:55.101 lat (usec): min=1619, max=177079, avg=61051.66, stdev=33720.92 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 31], 00:22:55.101 | 30.00th=[ 33], 40.00th=[ 42], 50.00th=[ 57], 60.00th=[ 67], 00:22:55.101 | 70.00th=[ 78], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 122], 00:22:55.101 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 167], 99.95th=[ 167], 00:22:55.101 | 99.99th=[ 169] 00:22:55.101 bw ( KiB/s): min=127744, max=485376, per=11.36%, avg=266871.35, stdev=98214.84, samples=20 00:22:55.101 iops : min= 499, max= 1896, avg=1042.40, stdev=383.71, samples=20 00:22:55.101 lat (msec) : 2=0.05%, 4=0.19%, 10=1.75%, 20=4.73%, 50=38.62% 00:22:55.101 lat (msec) : 100=41.41%, 250=13.26% 00:22:55.101 cpu : usr=0.35%, sys=3.35%, ctx=2184, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=10486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job2: (groupid=0, jobs=1): err= 0: pid=2440880: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=746, BW=187MiB/s (196MB/s)(1883MiB/10085msec) 00:22:55.101 slat (usec): min=9, max=50086, avg=1266.12, stdev=3342.82 00:22:55.101 clat (msec): min=24, max=174, avg=84.29, stdev=22.10 00:22:55.101 lat (msec): min=24, max=181, avg=85.56, stdev=22.39 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 64], 00:22:55.101 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 90], 00:22:55.101 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 121], 00:22:55.101 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 174], 99.95th=[ 176], 00:22:55.101 | 99.99th=[ 176] 00:22:55.101 bw ( KiB/s): min=128000, max=263168, per=8.14%, avg=191142.20, stdev=40015.84, samples=20 00:22:55.101 iops : min= 500, max= 1028, avg=746.55, stdev=156.43, samples=20 00:22:55.101 lat (msec) : 50=3.24%, 100=70.09%, 250=26.67% 00:22:55.101 cpu : usr=0.30%, sys=2.92%, ctx=1686, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=7530,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job3: (groupid=0, jobs=1): err= 0: pid=2440881: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=675, BW=169MiB/s (177MB/s)(1698MiB/10047msec) 00:22:55.101 slat (usec): min=7, max=81145, avg=1369.05, stdev=4048.33 00:22:55.101 clat (msec): min=13, max=186, avg=93.16, stdev=27.35 00:22:55.101 lat (msec): min=13, max=186, avg=94.52, stdev=27.82 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 65], 00:22:55.101 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 100], 60.00th=[ 104], 00:22:55.101 | 70.00th=[ 109], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 131], 00:22:55.101 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 182], 00:22:55.101 | 99.99th=[ 186] 00:22:55.101 bw ( KiB/s): min=128512, max=308841, per=7.34%, avg=172295.10, stdev=50017.15, samples=20 00:22:55.101 iops : min= 502, max= 1206, avg=672.90, stdev=195.38, samples=20 00:22:55.101 lat (msec) : 20=0.21%, 50=9.28%, 100=43.16%, 250=47.36% 00:22:55.101 cpu : usr=0.34%, sys=2.41%, ctx=1625, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=6791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job4: (groupid=0, jobs=1): err= 0: pid=2440882: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=928, BW=232MiB/s (243MB/s)(2336MiB/10066msec) 00:22:55.101 slat (usec): min=6, max=83479, avg=859.02, stdev=2823.43 00:22:55.101 clat (msec): min=4, max=147, avg=68.03, stdev=25.57 00:22:55.101 lat (msec): min=4, max=218, avg=68.89, stdev=25.98 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 45], 00:22:55.101 | 30.00th=[ 53], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 79], 00:22:55.101 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 105], 00:22:55.101 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:22:55.101 | 99.99th=[ 148] 00:22:55.101 bw ( KiB/s): min=168448, max=370176, per=10.11%, avg=237542.40, stdev=57993.91, samples=20 00:22:55.101 iops : min= 658, max= 1446, avg=927.90, stdev=226.54, samples=20 00:22:55.101 lat (msec) : 10=0.63%, 20=3.65%, 50=22.51%, 100=66.04%, 250=7.17% 00:22:55.101 cpu : usr=0.40%, sys=2.86%, ctx=2324, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=9342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job5: (groupid=0, jobs=1): err= 0: pid=2440883: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=741, BW=185MiB/s (194MB/s)(1872MiB/10090msec) 00:22:55.101 slat (usec): min=7, max=59663, avg=1231.06, stdev=3385.10 00:22:55.101 clat (msec): min=6, max=190, avg=84.88, stdev=23.49 00:22:55.101 lat (msec): min=6, max=190, avg=86.11, stdev=23.79 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 32], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 61], 00:22:55.101 | 30.00th=[ 70], 40.00th=[ 83], 50.00th=[ 90], 60.00th=[ 94], 00:22:55.101 
| 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 117], 00:22:55.101 | 99.00th=[ 140], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:22:55.101 | 99.99th=[ 190] 00:22:55.101 bw ( KiB/s): min=120832, max=261620, per=8.09%, avg=190032.30, stdev=41257.25, samples=20 00:22:55.101 iops : min= 472, max= 1021, avg=742.20, stdev=161.09, samples=20 00:22:55.101 lat (msec) : 10=0.16%, 20=0.39%, 50=5.14%, 100=67.43%, 250=26.88% 00:22:55.101 cpu : usr=0.40%, sys=2.49%, ctx=1754, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=7486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job6: (groupid=0, jobs=1): err= 0: pid=2440884: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=671, BW=168MiB/s (176MB/s)(1685MiB/10044msec) 00:22:55.101 slat (usec): min=6, max=60995, avg=1343.82, stdev=3830.59 00:22:55.101 clat (msec): min=7, max=167, avg=93.92, stdev=27.38 00:22:55.101 lat (msec): min=7, max=169, avg=95.26, stdev=27.89 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 54], 20.00th=[ 77], 00:22:55.101 | 30.00th=[ 89], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 105], 00:22:55.101 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 123], 95.00th=[ 128], 00:22:55.101 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:22:55.101 | 99.99th=[ 169] 00:22:55.101 bw ( KiB/s): min=132608, max=255488, per=7.28%, avg=170956.80, stdev=32581.74, samples=20 00:22:55.101 iops : min= 518, max= 998, avg=667.80, stdev=127.27, samples=20 00:22:55.101 lat (msec) : 10=0.12%, 20=2.12%, 50=6.41%, 100=41.36%, 250=49.99% 00:22:55.101 cpu : usr=0.26%, sys=2.10%, ctx=1680, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=6741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job7: (groupid=0, jobs=1): err= 0: pid=2440885: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=622, BW=156MiB/s (163MB/s)(1569MiB/10081msec) 00:22:55.101 slat (usec): min=8, max=49521, avg=1591.10, stdev=4042.19 00:22:55.101 clat (msec): min=17, max=171, avg=101.06, stdev=21.82 00:22:55.101 lat (msec): min=20, max=182, avg=102.65, stdev=22.21 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 40], 5.00th=[ 54], 10.00th=[ 69], 20.00th=[ 88], 00:22:55.101 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 109], 00:22:55.101 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 126], 95.00th=[ 130], 00:22:55.101 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 167], 00:22:55.101 | 99.99th=[ 171] 00:22:55.101 bw ( KiB/s): min=126976, max=256000, per=6.77%, avg=159042.85, stdev=32103.86, samples=20 00:22:55.101 iops : min= 496, max= 1000, avg=621.10, stdev=125.41, samples=20 00:22:55.101 lat (msec) : 20=0.02%, 50=3.65%, 100=36.29%, 250=60.04% 00:22:55.101 cpu : usr=0.36%, sys=2.20%, ctx=1407, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=6274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job8: (groupid=0, jobs=1): err= 0: pid=2440886: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=608, BW=152MiB/s (159MB/s)(1533MiB/10081msec) 00:22:55.101 slat (usec): min=8, max=67433, avg=1481.82, stdev=4017.02 00:22:55.101 clat (msec): min=20, max=181, avg=103.58, stdev=18.80 00:22:55.101 lat (msec): min=22, max=184, avg=105.06, stdev=19.20 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 44], 5.00th=[ 71], 10.00th=[ 83], 20.00th=[ 91], 00:22:55.101 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 108], 00:22:55.101 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 132], 00:22:55.101 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 178], 99.95th=[ 182], 00:22:55.101 | 99.99th=[ 182] 00:22:55.101 bw ( KiB/s): min=129788, max=179712, per=6.61%, avg=155362.75, stdev=14881.04, samples=20 00:22:55.101 iops : min= 506, max= 702, avg=606.75, stdev=58.18, samples=20 00:22:55.101 lat (msec) : 50=1.53%, 100=37.68%, 250=60.78% 00:22:55.101 cpu : usr=0.19%, sys=2.30%, ctx=1487, majf=0, minf=4097 00:22:55.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:55.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.101 issued rwts: total=6130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.101 job9: (groupid=0, jobs=1): err= 0: pid=2440887: Sat Jun 8 21:18:31 2024 00:22:55.101 read: IOPS=1162, BW=291MiB/s (305MB/s)(2930MiB/10085msec) 00:22:55.101 slat (usec): min=7, max=83552, avg=805.97, stdev=2672.88 00:22:55.101 clat (msec): min=9, max=182, avg=54.19, stdev=27.75 00:22:55.101 lat (msec): min=9, max=234, avg=55.00, stdev=28.13 00:22:55.101 clat percentiles (msec): 00:22:55.101 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 34], 00:22:55.101 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 42], 60.00th=[ 50], 00:22:55.101 | 70.00th=[ 58], 80.00th=[ 77], 90.00th=[ 101], 95.00th=[ 113], 00:22:55.101 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:22:55.101 | 99.99th=[ 182] 00:22:55.101 bw ( KiB/s): min=114176, max=495616, per=12.71%, avg=298453.95, stdev=117239.10, samples=20 00:22:55.102 iops : min= 446, max= 1936, avg=1165.80, stdev=457.98, samples=20 00:22:55.102 lat (msec) : 10=0.06%, 20=0.29%, 50=61.10%, 100=28.71%, 250=9.84% 00:22:55.102 cpu : usr=0.36%, sys=3.71%, ctx=2368, majf=0, minf=4097 00:22:55.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:55.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.102 issued rwts: total=11719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.102 job10: (groupid=0, jobs=1): err= 0: pid=2440888: Sat Jun 8 21:18:31 2024 00:22:55.102 read: IOPS=660, BW=165MiB/s (173MB/s)(1664MiB/10074msec) 00:22:55.102 slat (usec): min=9, max=70469, avg=1335.77, stdev=3734.95 00:22:55.102 clat (msec): min=4, max=191, avg=95.39, stdev=24.32 00:22:55.102 lat (msec): min=4, max=196, avg=96.72, stdev=24.73 00:22:55.102 
clat percentiles (msec): 00:22:55.102 | 1.00th=[ 31], 5.00th=[ 53], 10.00th=[ 65], 20.00th=[ 74], 00:22:55.102 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 100], 60.00th=[ 104], 00:22:55.102 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 123], 95.00th=[ 130], 00:22:55.102 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 188], 00:22:55.102 | 99.99th=[ 192] 00:22:55.102 bw ( KiB/s): min=123392, max=296960, per=7.18%, avg=168766.80, stdev=38662.66, samples=20 00:22:55.102 iops : min= 482, max= 1160, avg=659.15, stdev=151.07, samples=20 00:22:55.102 lat (msec) : 10=0.35%, 20=0.29%, 50=3.26%, 100=48.78%, 250=47.33% 00:22:55.102 cpu : usr=0.45%, sys=2.27%, ctx=1605, majf=0, minf=3534 00:22:55.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:55.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:55.102 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:55.102 00:22:55.102 Run status group 0 (all jobs): 00:22:55.102 READ: bw=2294MiB/s (2405MB/s), 152MiB/s-333MiB/s (159MB/s-349MB/s), io=22.6GiB (24.3GB), run=10013-10090msec 00:22:55.102 00:22:55.102 Disk stats (read/write): 00:22:55.102 nvme0n1: ios=26734/0, merge=0/0, ticks=1236363/0, in_queue=1236363, util=95.54% 00:22:55.102 nvme10n1: ios=20851/0, merge=0/0, ticks=1239030/0, in_queue=1239030, util=95.96% 00:22:55.102 nvme1n1: ios=14956/0, merge=0/0, ticks=1234493/0, in_queue=1234493, util=96.69% 00:22:55.102 nvme2n1: ios=13481/0, merge=0/0, ticks=1235279/0, in_queue=1235279, util=97.02% 00:22:55.102 nvme3n1: ios=18557/0, merge=0/0, ticks=1240832/0, in_queue=1240832, util=97.13% 00:22:55.102 nvme4n1: ios=14852/0, merge=0/0, ticks=1231957/0, in_queue=1231957, util=97.93% 00:22:55.102 nvme5n1: ios=13332/0, merge=0/0, ticks=1234231/0, in_queue=1234231, util=98.02% 00:22:55.102 nvme6n1: ios=12428/0, merge=0/0, ticks=1229904/0, in_queue=1229904, util=98.25% 00:22:55.102 nvme7n1: ios=12145/0, merge=0/0, ticks=1232146/0, in_queue=1232146, util=98.79% 00:22:55.102 nvme8n1: ios=23313/0, merge=0/0, ticks=1237687/0, in_queue=1237687, util=98.98% 00:22:55.102 nvme9n1: ios=13212/0, merge=0/0, ticks=1235059/0, in_queue=1235059, util=99.29% 00:22:55.102 21:18:31 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:55.102 [global] 00:22:55.102 thread=1 00:22:55.102 invalidate=1 00:22:55.102 rw=randwrite 00:22:55.102 time_based=1 00:22:55.102 runtime=10 00:22:55.102 ioengine=libaio 00:22:55.102 direct=1 00:22:55.102 bs=262144 00:22:55.102 iodepth=64 00:22:55.102 norandommap=1 00:22:55.102 numjobs=1 00:22:55.102 00:22:55.102 [job0] 00:22:55.102 filename=/dev/nvme0n1 00:22:55.102 [job1] 00:22:55.102 filename=/dev/nvme10n1 00:22:55.102 [job2] 00:22:55.102 filename=/dev/nvme1n1 00:22:55.102 [job3] 00:22:55.102 filename=/dev/nvme2n1 00:22:55.102 [job4] 00:22:55.102 filename=/dev/nvme3n1 00:22:55.102 [job5] 00:22:55.102 filename=/dev/nvme4n1 00:22:55.102 [job6] 00:22:55.102 filename=/dev/nvme5n1 00:22:55.102 [job7] 00:22:55.102 filename=/dev/nvme6n1 00:22:55.102 [job8] 00:22:55.102 filename=/dev/nvme7n1 00:22:55.102 [job9] 00:22:55.102 filename=/dev/nvme8n1 00:22:55.102 [job10] 00:22:55.102 filename=/dev/nvme9n1 00:22:55.102 Could not set queue depth (nvme0n1) 00:22:55.102 Could not set queue depth (nvme10n1) 00:22:55.102 Could 
not set queue depth (nvme1n1) 00:22:55.102 Could not set queue depth (nvme2n1) 00:22:55.102 Could not set queue depth (nvme3n1) 00:22:55.102 Could not set queue depth (nvme4n1) 00:22:55.102 Could not set queue depth (nvme5n1) 00:22:55.102 Could not set queue depth (nvme6n1) 00:22:55.102 Could not set queue depth (nvme7n1) 00:22:55.102 Could not set queue depth (nvme8n1) 00:22:55.102 Could not set queue depth (nvme9n1) 00:22:55.102 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:55.102 fio-3.35 00:22:55.102 Starting 11 threads 00:23:05.099 00:23:05.099 job0: (groupid=0, jobs=1): err= 0: pid=2443291: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=676, BW=169MiB/s (177MB/s)(1706MiB/10092msec); 0 zone resets 00:23:05.099 slat (usec): min=24, max=33593, avg=1452.15, stdev=2594.73 00:23:05.099 clat (msec): min=21, max=197, avg=93.16, stdev=10.79 00:23:05.099 lat (msec): min=21, max=198, avg=94.62, stdev=10.75 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 73], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:23:05.099 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 94], 00:23:05.099 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 106], 00:23:05.099 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 188], 99.95th=[ 194], 00:23:05.099 | 99.99th=[ 199] 00:23:05.099 bw ( KiB/s): min=138240, max=186368, per=10.80%, avg=173056.00, stdev=10730.81, samples=20 00:23:05.099 iops : min= 540, max= 728, avg=676.00, stdev=41.92, samples=20 00:23:05.099 lat (msec) : 50=0.28%, 100=86.90%, 250=12.82% 00:23:05.099 cpu : usr=1.55%, sys=2.30%, ctx=1781, majf=0, minf=1 00:23:05.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.099 issued rwts: total=0,6823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.099 job1: (groupid=0, jobs=1): err= 0: pid=2443307: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=592, BW=148MiB/s (155MB/s)(1490MiB/10054msec); 0 zone resets 00:23:05.099 slat (usec): 
min=24, max=75572, avg=1465.92, stdev=3596.40 00:23:05.099 clat (msec): min=7, max=304, avg=106.50, stdev=41.72 00:23:05.099 lat (msec): min=7, max=304, avg=107.97, stdev=42.14 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 69], 20.00th=[ 79], 00:23:05.099 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 97], 60.00th=[ 107], 00:23:05.099 | 70.00th=[ 124], 80.00th=[ 138], 90.00th=[ 157], 95.00th=[ 180], 00:23:05.099 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 300], 99.95th=[ 305], 00:23:05.099 | 99.99th=[ 305] 00:23:05.099 bw ( KiB/s): min=82432, max=237568, per=9.42%, avg=150912.00, stdev=40465.52, samples=20 00:23:05.099 iops : min= 322, max= 928, avg=589.50, stdev=158.07, samples=20 00:23:05.099 lat (msec) : 10=0.03%, 20=1.96%, 50=3.88%, 100=47.90%, 250=45.23% 00:23:05.099 lat (msec) : 500=0.99% 00:23:05.099 cpu : usr=1.37%, sys=1.95%, ctx=2219, majf=0, minf=1 00:23:05.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.099 issued rwts: total=0,5958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.099 job2: (groupid=0, jobs=1): err= 0: pid=2443338: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=569, BW=142MiB/s (149MB/s)(1442MiB/10119msec); 0 zone resets 00:23:05.099 slat (usec): min=26, max=57094, avg=1683.52, stdev=3471.86 00:23:05.099 clat (msec): min=14, max=261, avg=110.57, stdev=26.24 00:23:05.099 lat (msec): min=16, max=261, avg=112.26, stdev=26.47 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 43], 5.00th=[ 75], 10.00th=[ 86], 20.00th=[ 97], 00:23:05.099 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 112], 00:23:05.099 | 70.00th=[ 118], 80.00th=[ 126], 90.00th=[ 134], 95.00th=[ 153], 00:23:05.099 | 99.00th=[ 220], 99.50th=[ 247], 99.90th=[ 253], 99.95th=[ 253], 00:23:05.099 | 99.99th=[ 262] 00:23:05.099 bw ( KiB/s): min=95232, max=207872, per=9.12%, avg=146022.40, stdev=24872.65, samples=20 00:23:05.099 iops : min= 372, max= 812, avg=570.40, stdev=97.16, samples=20 00:23:05.099 lat (msec) : 20=0.14%, 50=1.11%, 100=25.09%, 250=73.44%, 500=0.23% 00:23:05.099 cpu : usr=1.43%, sys=1.83%, ctx=1660, majf=0, minf=1 00:23:05.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.099 issued rwts: total=0,5767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.099 job3: (groupid=0, jobs=1): err= 0: pid=2443351: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=511, BW=128MiB/s (134MB/s)(1295MiB/10131msec); 0 zone resets 00:23:05.099 slat (usec): min=19, max=74113, avg=1848.73, stdev=4161.27 00:23:05.099 clat (msec): min=6, max=301, avg=123.27, stdev=38.32 00:23:05.099 lat (msec): min=8, max=301, avg=125.12, stdev=38.62 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 36], 5.00th=[ 75], 10.00th=[ 87], 20.00th=[ 96], 00:23:05.099 | 30.00th=[ 105], 40.00th=[ 111], 50.00th=[ 116], 60.00th=[ 125], 00:23:05.099 | 70.00th=[ 138], 80.00th=[ 150], 90.00th=[ 165], 95.00th=[ 186], 00:23:05.099 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 296], 99.95th=[ 296], 00:23:05.099 | 99.99th=[ 300] 
00:23:05.099 bw ( KiB/s): min=75776, max=194560, per=8.18%, avg=130969.60, stdev=29433.54, samples=20 00:23:05.099 iops : min= 296, max= 760, avg=511.60, stdev=114.97, samples=20 00:23:05.099 lat (msec) : 10=0.04%, 20=0.44%, 50=1.54%, 100=21.78%, 250=74.34% 00:23:05.099 lat (msec) : 500=1.85% 00:23:05.099 cpu : usr=1.19%, sys=1.42%, ctx=1570, majf=0, minf=1 00:23:05.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.099 issued rwts: total=0,5179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.099 job4: (groupid=0, jobs=1): err= 0: pid=2443357: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=512, BW=128MiB/s (134MB/s)(1291MiB/10081msec); 0 zone resets 00:23:05.099 slat (usec): min=26, max=172869, avg=1832.08, stdev=6258.42 00:23:05.099 clat (msec): min=35, max=511, avg=123.03, stdev=71.55 00:23:05.099 lat (msec): min=35, max=511, avg=124.87, stdev=72.49 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 67], 5.00th=[ 79], 10.00th=[ 84], 20.00th=[ 90], 00:23:05.099 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 106], 00:23:05.099 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 201], 95.00th=[ 326], 00:23:05.099 | 99.00th=[ 409], 99.50th=[ 464], 99.90th=[ 510], 99.95th=[ 510], 00:23:05.099 | 99.99th=[ 510] 00:23:05.099 bw ( KiB/s): min=37376, max=195072, per=8.15%, avg=130585.60, stdev=47524.34, samples=20 00:23:05.099 iops : min= 146, max= 762, avg=510.10, stdev=185.64, samples=20 00:23:05.099 lat (msec) : 50=0.19%, 100=44.29%, 250=48.51%, 500=6.74%, 750=0.27% 00:23:05.099 cpu : usr=1.35%, sys=1.61%, ctx=1571, majf=0, minf=1 00:23:05.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.099 issued rwts: total=0,5164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.099 job5: (groupid=0, jobs=1): err= 0: pid=2443369: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=661, BW=165MiB/s (173MB/s)(1664MiB/10070msec); 0 zone resets 00:23:05.099 slat (usec): min=15, max=41306, avg=1462.84, stdev=2717.07 00:23:05.099 clat (msec): min=3, max=175, avg=95.33, stdev=20.15 00:23:05.099 lat (msec): min=5, max=175, avg=96.79, stdev=20.31 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 43], 5.00th=[ 68], 10.00th=[ 72], 20.00th=[ 80], 00:23:05.099 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 102], 00:23:05.099 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 129], 00:23:05.099 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 176], 00:23:05.099 | 99.99th=[ 176] 00:23:05.099 bw ( KiB/s): min=122880, max=213504, per=10.54%, avg=168806.40, stdev=27707.87, samples=20 00:23:05.099 iops : min= 480, max= 834, avg=659.40, stdev=108.23, samples=20 00:23:05.099 lat (msec) : 4=0.02%, 10=0.06%, 20=0.33%, 50=0.77%, 100=57.52% 00:23:05.099 lat (msec) : 250=41.31% 00:23:05.099 cpu : usr=1.47%, sys=1.75%, ctx=1885, majf=0, minf=1 00:23:05.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:05.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.099 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.099 issued rwts: total=0,6657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.099 job6: (groupid=0, jobs=1): err= 0: pid=2443370: Sat Jun 8 21:18:42 2024 00:23:05.099 write: IOPS=482, BW=121MiB/s (127MB/s)(1223MiB/10132msec); 0 zone resets 00:23:05.099 slat (usec): min=24, max=62801, avg=1940.10, stdev=4170.49 00:23:05.099 clat (msec): min=27, max=315, avg=130.42, stdev=33.24 00:23:05.099 lat (msec): min=27, max=315, avg=132.36, stdev=33.45 00:23:05.099 clat percentiles (msec): 00:23:05.099 | 1.00th=[ 62], 5.00th=[ 91], 10.00th=[ 102], 20.00th=[ 107], 00:23:05.099 | 30.00th=[ 112], 40.00th=[ 118], 50.00th=[ 124], 60.00th=[ 131], 00:23:05.099 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 190], 00:23:05.100 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 309], 00:23:05.100 | 99.99th=[ 317] 00:23:05.100 bw ( KiB/s): min=78336, max=168448, per=7.71%, avg=123571.20, stdev=20994.48, samples=20 00:23:05.100 iops : min= 306, max= 658, avg=482.70, stdev=82.01, samples=20 00:23:05.100 lat (msec) : 50=0.33%, 100=9.16%, 250=89.43%, 500=1.08% 00:23:05.100 cpu : usr=1.15%, sys=1.26%, ctx=1453, majf=0, minf=1 00:23:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.100 issued rwts: total=0,4890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.100 job7: (groupid=0, jobs=1): err= 0: pid=2443371: Sat Jun 8 21:18:42 2024 00:23:05.100 write: IOPS=618, BW=155MiB/s (162MB/s)(1556MiB/10070msec); 0 zone resets 00:23:05.100 slat (usec): min=16, max=141957, avg=1527.07, stdev=3654.31 00:23:05.100 clat (msec): min=7, max=254, avg=101.94, stdev=24.10 00:23:05.100 lat (msec): min=10, max=255, avg=103.47, stdev=24.28 00:23:05.100 clat percentiles (msec): 00:23:05.100 | 1.00th=[ 44], 5.00th=[ 75], 10.00th=[ 82], 20.00th=[ 89], 00:23:05.100 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 103], 00:23:05.100 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 126], 95.00th=[ 136], 00:23:05.100 | 99.00th=[ 207], 99.50th=[ 226], 99.90th=[ 249], 99.95th=[ 249], 00:23:05.100 | 99.99th=[ 255] 00:23:05.100 bw ( KiB/s): min=104960, max=193024, per=9.85%, avg=157721.60, stdev=25172.54, samples=20 00:23:05.100 iops : min= 410, max= 754, avg=616.10, stdev=98.33, samples=20 00:23:05.100 lat (msec) : 10=0.02%, 20=0.22%, 50=1.29%, 100=52.09%, 250=46.35% 00:23:05.100 lat (msec) : 500=0.03% 00:23:05.100 cpu : usr=1.36%, sys=2.12%, ctx=1853, majf=0, minf=1 00:23:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.100 issued rwts: total=0,6224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.100 job8: (groupid=0, jobs=1): err= 0: pid=2443372: Sat Jun 8 21:18:42 2024 00:23:05.100 write: IOPS=515, BW=129MiB/s (135MB/s)(1300MiB/10091msec); 0 zone resets 00:23:05.100 slat (usec): min=22, max=57795, avg=1823.85, stdev=3841.77 00:23:05.100 clat (msec): min=6, max=267, avg=122.36, stdev=36.16 00:23:05.100 lat (msec): min=7, max=267, avg=124.18, 
stdev=36.60 00:23:05.100 clat percentiles (msec): 00:23:05.100 | 1.00th=[ 46], 5.00th=[ 86], 10.00th=[ 91], 20.00th=[ 96], 00:23:05.100 | 30.00th=[ 100], 40.00th=[ 106], 50.00th=[ 115], 60.00th=[ 124], 00:23:05.100 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 199], 00:23:05.100 | 99.00th=[ 230], 99.50th=[ 245], 99.90th=[ 266], 99.95th=[ 268], 00:23:05.100 | 99.99th=[ 268] 00:23:05.100 bw ( KiB/s): min=77824, max=185344, per=8.21%, avg=131481.60, stdev=32953.41, samples=20 00:23:05.100 iops : min= 304, max= 724, avg=513.60, stdev=128.72, samples=20 00:23:05.100 lat (msec) : 10=0.04%, 20=0.17%, 50=1.04%, 100=29.97%, 250=68.38% 00:23:05.100 lat (msec) : 500=0.40% 00:23:05.100 cpu : usr=1.26%, sys=1.58%, ctx=1611, majf=0, minf=1 00:23:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.100 issued rwts: total=0,5199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.100 job9: (groupid=0, jobs=1): err= 0: pid=2443373: Sat Jun 8 21:18:42 2024 00:23:05.100 write: IOPS=614, BW=154MiB/s (161MB/s)(1549MiB/10090msec); 0 zone resets 00:23:05.100 slat (usec): min=21, max=113003, avg=1581.38, stdev=3584.84 00:23:05.100 clat (msec): min=13, max=221, avg=102.57, stdev=23.37 00:23:05.100 lat (msec): min=13, max=221, avg=104.15, stdev=23.47 00:23:05.100 clat percentiles (msec): 00:23:05.100 | 1.00th=[ 45], 5.00th=[ 71], 10.00th=[ 79], 20.00th=[ 86], 00:23:05.100 | 30.00th=[ 91], 40.00th=[ 95], 50.00th=[ 101], 60.00th=[ 106], 00:23:05.100 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 133], 95.00th=[ 144], 00:23:05.100 | 99.00th=[ 174], 99.50th=[ 194], 99.90th=[ 211], 99.95th=[ 215], 00:23:05.100 | 99.99th=[ 222] 00:23:05.100 bw ( KiB/s): min=123392, max=216576, per=9.80%, avg=157030.40, stdev=23255.53, samples=20 00:23:05.100 iops : min= 482, max= 846, avg=613.40, stdev=90.84, samples=20 00:23:05.100 lat (msec) : 20=0.13%, 50=1.15%, 100=47.56%, 250=51.17% 00:23:05.100 cpu : usr=1.33%, sys=1.75%, ctx=1668, majf=0, minf=1 00:23:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.100 issued rwts: total=0,6197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.100 job10: (groupid=0, jobs=1): err= 0: pid=2443374: Sat Jun 8 21:18:42 2024 00:23:05.100 write: IOPS=528, BW=132MiB/s (139MB/s)(1336MiB/10107msec); 0 zone resets 00:23:05.100 slat (usec): min=24, max=87931, avg=1823.55, stdev=4576.64 00:23:05.100 clat (msec): min=8, max=485, avg=119.13, stdev=59.41 00:23:05.100 lat (msec): min=8, max=506, avg=120.96, stdev=60.13 00:23:05.100 clat percentiles (msec): 00:23:05.100 | 1.00th=[ 27], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 75], 00:23:05.100 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 103], 60.00th=[ 116], 00:23:05.100 | 70.00th=[ 136], 80.00th=[ 157], 90.00th=[ 201], 95.00th=[ 224], 00:23:05.100 | 99.00th=[ 359], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 485], 00:23:05.100 | 99.99th=[ 485] 00:23:05.100 bw ( KiB/s): min=71168, max=224256, per=8.44%, avg=135219.20, stdev=50149.02, samples=20 00:23:05.100 iops : min= 278, max= 876, avg=528.20, stdev=195.89, samples=20 
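For reference: the randwrite pass whose per-job numbers appear above was driven by the job file echoed by scripts/fio-wrapper earlier in the log — libaio, direct=1, bs=262144, iodepth=64, a 10-second time_based run, one job per /dev/nvmeXnY block device. A rough way to reproduce it by hand is sketched below; the job-file parameters are copied from the log, while the temporary file name and the shortened device list are illustrative assumptions only.

# Minimal sketch, assuming the same NVMe-oF block devices are still connected.
cat > /tmp/multiconnection-randwrite.fio <<'EOF'
[global]
rw=randwrite
ioengine=libaio
direct=1
thread=1
invalidate=1
time_based=1
runtime=10
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme10n1
EOF
fio /tmp/multiconnection-randwrite.fio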
00:23:05.100 lat (msec) : 10=0.02%, 20=0.51%, 50=2.21%, 100=46.02%, 250=48.62% 00:23:05.100 lat (msec) : 500=2.62% 00:23:05.100 cpu : usr=1.08%, sys=1.55%, ctx=1477, majf=0, minf=1 00:23:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:05.100 issued rwts: total=0,5345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:05.100 00:23:05.100 Run status group 0 (all jobs): 00:23:05.100 WRITE: bw=1564MiB/s (1640MB/s), 121MiB/s-169MiB/s (127MB/s-177MB/s), io=15.5GiB (16.6GB), run=10054-10132msec 00:23:05.100 00:23:05.100 Disk stats (read/write): 00:23:05.100 nvme0n1: ios=47/13323, merge=0/0, ticks=1415/1197071, in_queue=1198486, util=99.70% 00:23:05.100 nvme10n1: ios=49/11417, merge=0/0, ticks=137/1196083, in_queue=1196220, util=97.39% 00:23:05.100 nvme1n1: ios=28/11495, merge=0/0, ticks=317/1226001, in_queue=1226318, util=99.62% 00:23:05.100 nvme2n1: ios=48/10302, merge=0/0, ticks=1288/1220977, in_queue=1222265, util=100.00% 00:23:05.100 nvme3n1: ios=39/9994, merge=0/0, ticks=1422/1197981, in_queue=1199403, util=100.00% 00:23:05.100 nvme4n1: ios=0/12933, merge=0/0, ticks=0/1197819, in_queue=1197819, util=97.71% 00:23:05.100 nvme5n1: ios=44/9723, merge=0/0, ticks=995/1219098, in_queue=1220093, util=100.00% 00:23:05.100 nvme6n1: ios=39/12108, merge=0/0, ticks=1126/1180758, in_queue=1181884, util=100.00% 00:23:05.100 nvme7n1: ios=0/10397, merge=0/0, ticks=0/1231118, in_queue=1231118, util=98.67% 00:23:05.100 nvme8n1: ios=48/12001, merge=0/0, ticks=3831/1185344, in_queue=1189175, util=100.00% 00:23:05.100 nvme9n1: ios=43/10670, merge=0/0, ticks=2055/1210421, in_queue=1212476, util=100.00% 00:23:05.100 21:18:42 -- target/multiconnection.sh@36 -- # sync 00:23:05.100 21:18:42 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:05.100 21:18:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.100 21:18:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:05.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:05.100 21:18:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:05.100 21:18:42 -- common/autotest_common.sh@1198 -- # local i=0 00:23:05.100 21:18:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:05.100 21:18:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:05.100 21:18:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:05.100 21:18:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:05.100 21:18:42 -- common/autotest_common.sh@1210 -- # return 0 00:23:05.100 21:18:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.100 21:18:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.100 21:18:42 -- common/autotest_common.sh@10 -- # set +x 00:23:05.100 21:18:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.100 21:18:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.100 21:18:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:05.100 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:05.100 21:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:05.100 21:18:43 -- 
common/autotest_common.sh@1198 -- # local i=0 00:23:05.100 21:18:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:05.100 21:18:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:05.100 21:18:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:05.101 21:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:05.101 21:18:43 -- common/autotest_common.sh@1210 -- # return 0 00:23:05.101 21:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:05.101 21:18:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.101 21:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:05.361 21:18:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.361 21:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.361 21:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:05.621 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:05.621 21:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:05.621 21:18:43 -- common/autotest_common.sh@1198 -- # local i=0 00:23:05.621 21:18:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:05.621 21:18:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:05.621 21:18:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:05.621 21:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:05.621 21:18:43 -- common/autotest_common.sh@1210 -- # return 0 00:23:05.621 21:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:05.621 21:18:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.621 21:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:05.621 21:18:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.621 21:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.621 21:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:05.882 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:05.882 21:18:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:05.882 21:18:43 -- common/autotest_common.sh@1198 -- # local i=0 00:23:05.882 21:18:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:05.882 21:18:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:05.882 21:18:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:05.882 21:18:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:05.882 21:18:43 -- common/autotest_common.sh@1210 -- # return 0 00:23:05.882 21:18:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:05.882 21:18:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.882 21:18:43 -- common/autotest_common.sh@10 -- # set +x 00:23:05.882 21:18:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.882 21:18:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.882 21:18:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:06.142 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:06.142 21:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:06.142 21:18:44 -- common/autotest_common.sh@1198 -- # local i=0 00:23:06.142 21:18:44 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:06.142 21:18:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:06.403 21:18:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:06.403 21:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:06.403 21:18:44 -- common/autotest_common.sh@1210 -- # return 0 00:23:06.403 21:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:06.403 21:18:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.403 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:06.403 21:18:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.403 21:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.403 21:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:06.403 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:06.403 21:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:06.403 21:18:44 -- common/autotest_common.sh@1198 -- # local i=0 00:23:06.403 21:18:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:06.403 21:18:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:06.403 21:18:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:06.403 21:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:06.403 21:18:44 -- common/autotest_common.sh@1210 -- # return 0 00:23:06.404 21:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:06.404 21:18:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.404 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:06.404 21:18:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.404 21:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.404 21:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:06.665 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:06.665 21:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:06.665 21:18:44 -- common/autotest_common.sh@1198 -- # local i=0 00:23:06.665 21:18:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:06.665 21:18:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:06.665 21:18:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:06.665 21:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:06.665 21:18:44 -- common/autotest_common.sh@1210 -- # return 0 00:23:06.665 21:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:06.665 21:18:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.665 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:06.665 21:18:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.665 21:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.665 21:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:06.926 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:06.926 21:18:44 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:06.926 21:18:44 -- common/autotest_common.sh@1198 -- # local i=0 00:23:06.926 21:18:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:06.926 21:18:44 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:06.926 21:18:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:06.926 21:18:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:06.926 21:18:44 -- common/autotest_common.sh@1210 -- # return 0 00:23:06.926 21:18:44 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:06.926 21:18:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.926 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:23:06.926 21:18:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.926 21:18:44 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.926 21:18:44 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:07.188 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:07.188 21:18:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:07.188 21:18:45 -- common/autotest_common.sh@1198 -- # local i=0 00:23:07.188 21:18:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:07.188 21:18:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:07.188 21:18:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:07.188 21:18:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:07.188 21:18:45 -- common/autotest_common.sh@1210 -- # return 0 00:23:07.188 21:18:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:07.188 21:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.188 21:18:45 -- common/autotest_common.sh@10 -- # set +x 00:23:07.188 21:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.188 21:18:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.188 21:18:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:07.188 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:07.188 21:18:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:07.188 21:18:45 -- common/autotest_common.sh@1198 -- # local i=0 00:23:07.188 21:18:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:07.188 21:18:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:07.449 21:18:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:07.449 21:18:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:07.449 21:18:45 -- common/autotest_common.sh@1210 -- # return 0 00:23:07.449 21:18:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:07.449 21:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.449 21:18:45 -- common/autotest_common.sh@10 -- # set +x 00:23:07.449 21:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.449 21:18:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.449 21:18:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:07.449 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:07.449 21:18:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:07.449 21:18:45 -- common/autotest_common.sh@1198 -- # local i=0 00:23:07.449 21:18:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:07.449 21:18:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:07.449 21:18:45 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:07.449 21:18:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:07.449 21:18:45 -- common/autotest_common.sh@1210 -- # return 0 00:23:07.449 21:18:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:07.449 21:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.449 21:18:45 -- common/autotest_common.sh@10 -- # set +x 00:23:07.449 21:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.449 21:18:45 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:07.449 21:18:45 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:07.449 21:18:45 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:07.449 21:18:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:07.449 21:18:45 -- nvmf/common.sh@116 -- # sync 00:23:07.449 21:18:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:07.449 21:18:45 -- nvmf/common.sh@119 -- # set +e 00:23:07.449 21:18:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:07.449 21:18:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:07.449 rmmod nvme_tcp 00:23:07.449 rmmod nvme_fabrics 00:23:07.449 rmmod nvme_keyring 00:23:07.449 21:18:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:07.449 21:18:45 -- nvmf/common.sh@123 -- # set -e 00:23:07.449 21:18:45 -- nvmf/common.sh@124 -- # return 0 00:23:07.449 21:18:45 -- nvmf/common.sh@477 -- # '[' -n 2431467 ']' 00:23:07.449 21:18:45 -- nvmf/common.sh@478 -- # killprocess 2431467 00:23:07.449 21:18:45 -- common/autotest_common.sh@926 -- # '[' -z 2431467 ']' 00:23:07.449 21:18:45 -- common/autotest_common.sh@930 -- # kill -0 2431467 00:23:07.449 21:18:45 -- common/autotest_common.sh@931 -- # uname 00:23:07.449 21:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:07.710 21:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2431467 00:23:07.710 21:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:07.710 21:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:07.710 21:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2431467' 00:23:07.710 killing process with pid 2431467 00:23:07.710 21:18:45 -- common/autotest_common.sh@945 -- # kill 2431467 00:23:07.710 21:18:45 -- common/autotest_common.sh@950 -- # wait 2431467 00:23:07.972 21:18:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:07.972 21:18:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:07.972 21:18:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:07.972 21:18:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.972 21:18:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:07.972 21:18:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.972 21:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.972 21:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.888 21:18:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:09.888 00:23:09.888 real 1m17.261s 00:23:09.888 user 4m49.232s 00:23:09.888 sys 0m22.796s 00:23:09.888 21:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:09.888 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 ************************************ 00:23:09.888 END TEST nvmf_multiconnection 00:23:09.888 ************************************ 00:23:09.888 21:18:47 -- 
nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:09.888 21:18:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:09.888 21:18:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:09.888 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 ************************************ 00:23:09.888 START TEST nvmf_initiator_timeout 00:23:09.888 ************************************ 00:23:10.149 21:18:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:10.149 * Looking for test storage... 00:23:10.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:10.149 21:18:48 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.149 21:18:48 -- nvmf/common.sh@7 -- # uname -s 00:23:10.149 21:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.149 21:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.149 21:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.149 21:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.149 21:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.149 21:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.149 21:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.149 21:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.149 21:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.149 21:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.149 21:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.149 21:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.149 21:18:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.149 21:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.149 21:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.149 21:18:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.149 21:18:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.149 21:18:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.149 21:18:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.149 21:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.150 21:18:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.150 21:18:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.150 21:18:48 -- paths/export.sh@5 -- # export PATH 00:23:10.150 21:18:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.150 21:18:48 -- nvmf/common.sh@46 -- # : 0 00:23:10.150 21:18:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:10.150 21:18:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:10.150 21:18:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:10.150 21:18:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.150 21:18:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.150 21:18:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:10.150 21:18:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:10.150 21:18:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:10.150 21:18:48 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.150 21:18:48 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.150 21:18:48 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:10.150 21:18:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:10.150 21:18:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.150 21:18:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:10.150 21:18:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:10.150 21:18:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:10.150 21:18:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.150 21:18:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.150 21:18:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.150 21:18:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:10.150 21:18:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:10.150 21:18:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:10.150 21:18:48 -- common/autotest_common.sh@10 -- # set +x 00:23:16.741 21:18:54 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:23:16.741 21:18:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:16.741 21:18:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:16.741 21:18:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:16.741 21:18:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:16.741 21:18:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:16.741 21:18:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:16.741 21:18:54 -- nvmf/common.sh@294 -- # net_devs=() 00:23:16.741 21:18:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:16.741 21:18:54 -- nvmf/common.sh@295 -- # e810=() 00:23:16.741 21:18:54 -- nvmf/common.sh@295 -- # local -ga e810 00:23:16.741 21:18:54 -- nvmf/common.sh@296 -- # x722=() 00:23:16.741 21:18:54 -- nvmf/common.sh@296 -- # local -ga x722 00:23:16.741 21:18:54 -- nvmf/common.sh@297 -- # mlx=() 00:23:16.741 21:18:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:16.741 21:18:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.741 21:18:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.742 21:18:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.742 21:18:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.742 21:18:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:16.742 21:18:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:16.742 21:18:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:16.742 21:18:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.742 21:18:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.742 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.742 21:18:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.742 21:18:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.742 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.742 21:18:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.742 21:18:54 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:16.742 21:18:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.742 21:18:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.742 21:18:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.742 21:18:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.742 21:18:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.742 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.742 21:18:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.742 21:18:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.742 21:18:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.742 21:18:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.742 21:18:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.742 21:18:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.742 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.742 21:18:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.742 21:18:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:16.742 21:18:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:16.742 21:18:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:16.742 21:18:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:16.742 21:18:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.742 21:18:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.742 21:18:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.742 21:18:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:16.742 21:18:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.742 21:18:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.742 21:18:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:16.742 21:18:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.742 21:18:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.742 21:18:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:16.742 21:18:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:16.742 21:18:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.742 21:18:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.003 21:18:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.003 21:18:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.003 21:18:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:17.003 21:18:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.003 21:18:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.003 21:18:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.264 21:18:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:17.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:17.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:23:17.264 00:23:17.264 --- 10.0.0.2 ping statistics --- 00:23:17.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.264 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:23:17.264 21:18:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:23:17.264 00:23:17.264 --- 10.0.0.1 ping statistics --- 00:23:17.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.264 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:23:17.264 21:18:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.264 21:18:55 -- nvmf/common.sh@410 -- # return 0 00:23:17.264 21:18:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:17.264 21:18:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.264 21:18:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:17.264 21:18:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:17.264 21:18:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.264 21:18:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:17.264 21:18:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:17.264 21:18:55 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:17.264 21:18:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:17.264 21:18:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:17.264 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:23:17.264 21:18:55 -- nvmf/common.sh@469 -- # nvmfpid=2449692 00:23:17.264 21:18:55 -- nvmf/common.sh@470 -- # waitforlisten 2449692 00:23:17.264 21:18:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.264 21:18:55 -- common/autotest_common.sh@819 -- # '[' -z 2449692 ']' 00:23:17.264 21:18:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.264 21:18:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:17.264 21:18:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.264 21:18:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:17.264 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:23:17.264 [2024-06-08 21:18:55.211273] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:17.264 [2024-06-08 21:18:55.211338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.264 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.264 [2024-06-08 21:18:55.282358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.264 [2024-06-08 21:18:55.355703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:17.264 [2024-06-08 21:18:55.355839] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
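The network bring-up traced above in nvmf/common.sh (before nvmf_tgt is launched) amounts to moving one port of the E810 pair into a private network namespace and addressing the two ends so that 10.0.0.2 is the target side and 10.0.0.1 the initiator side. Condensed into plain commands, keeping the interface and namespace names from the log, it is roughly the following sketch (the harness also flushes any pre-existing addresses first; error handling omitted):

# cvl_0_0 becomes the target-side interface inside cvl_0_0_ns_spdk,
# cvl_0_1 stays in the host namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity checks, as performed in the log
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1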
00:23:17.264 [2024-06-08 21:18:55.355848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.264 [2024-06-08 21:18:55.355857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.525 [2024-06-08 21:18:55.355996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.525 [2024-06-08 21:18:55.356113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.525 [2024-06-08 21:18:55.356274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.525 [2024-06-08 21:18:55.356275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.097 21:18:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:18.097 21:18:55 -- common/autotest_common.sh@852 -- # return 0 00:23:18.097 21:18:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:18.097 21:18:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:18.097 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 21:18:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:18.097 21:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.097 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 Malloc0 00:23:18.097 21:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:18.097 21:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.097 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 Delay0 00:23:18.097 21:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.097 21:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.097 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 [2024-06-08 21:18:56.061703] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.097 21:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:18.097 21:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.097 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 21:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:18.097 21:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.097 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 21:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.097 21:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.097 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 [2024-06-08 21:18:56.098743] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.097 21:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.097 21:18:56 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:19.511 21:18:57 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:19.511 21:18:57 -- common/autotest_common.sh@1177 -- # local i=0 00:23:19.511 21:18:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.511 21:18:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:19.511 21:18:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:22.055 21:18:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:22.055 21:18:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:22.056 21:18:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:22.056 21:18:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:22.056 21:18:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:22.056 21:18:59 -- common/autotest_common.sh@1187 -- # return 0 00:23:22.056 21:18:59 -- target/initiator_timeout.sh@35 -- # fio_pid=2450739 00:23:22.056 21:18:59 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:22.056 21:18:59 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:22.056 [global] 00:23:22.056 thread=1 00:23:22.056 invalidate=1 00:23:22.056 rw=write 00:23:22.056 time_based=1 00:23:22.056 runtime=60 00:23:22.056 ioengine=libaio 00:23:22.056 direct=1 00:23:22.056 bs=4096 00:23:22.056 iodepth=1 00:23:22.056 norandommap=0 00:23:22.056 numjobs=1 00:23:22.056 00:23:22.056 verify_dump=1 00:23:22.056 verify_backlog=512 00:23:22.056 verify_state_save=0 00:23:22.056 do_verify=1 00:23:22.056 verify=crc32c-intel 00:23:22.056 [job0] 00:23:22.056 filename=/dev/nvme0n1 00:23:22.056 Could not set queue depth (nvme0n1) 00:23:22.056 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:22.056 fio-3.35 00:23:22.056 Starting 1 thread 00:23:24.601 21:19:02 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:24.602 21:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.602 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:24.602 true 00:23:24.602 21:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.602 21:19:02 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:24.602 21:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.602 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:24.602 true 00:23:24.602 21:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.602 21:19:02 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:24.602 21:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.602 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:24.602 true 00:23:24.602 21:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.602 21:19:02 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
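The entries above show initiator_timeout.sh assembling its target and workload: a 64 MiB malloc bdev wrapped in a delay bdev (Delay0) with 30 us latencies, exported over TCP as nqn.2016-06.io.spdk:cnode1, connected from the initiator with nvme-cli, and driven by a 60-second 4 KiB fio write job; the bdev_delay_update_latency calls around this point raise the latencies to 31000000 us (31 s) so in-flight I/O outlives the initiator timeout, and later drop them back to 30 us. The rpc_cmd helper wraps SPDK's scripts/rpc.py against the target in the namespace; a rough hand-run equivalent, assuming the SPDK tree as working directory and omitting the host NQN/ID options, would be:

    RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"           # target lives in the namespace
    $RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us added latency
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side
    $RPC bdev_delay_update_latency Delay0 avg_read 31000000        # stall reads past the timeout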
00:23:24.602 21:19:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:24.602 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:23:24.602 true 00:23:24.602 21:19:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:24.602 21:19:02 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:27.902 21:19:05 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:27.902 21:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:27.902 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:27.902 true 00:23:27.902 21:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:27.902 21:19:05 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:27.902 21:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:27.902 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:27.902 true 00:23:27.903 21:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:27.903 21:19:05 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:27.903 21:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:27.903 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:27.903 true 00:23:27.903 21:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:27.903 21:19:05 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:27.903 21:19:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:27.903 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:23:27.903 true 00:23:27.903 21:19:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:27.903 21:19:05 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:27.903 21:19:05 -- target/initiator_timeout.sh@54 -- # wait 2450739 00:24:24.174 00:24:24.174 job0: (groupid=0, jobs=1): err= 0: pid=2450904: Sat Jun 8 21:20:00 2024 00:24:24.174 read: IOPS=8, BW=33.3KiB/s (34.1kB/s)(2000KiB/60025msec) 00:24:24.174 slat (usec): min=7, max=3580, avg=32.61, stdev=159.05 00:24:24.174 clat (usec): min=481, max=42067k, avg=119542.11, stdev=1879757.13 00:24:24.174 lat (usec): min=507, max=42067k, avg=119574.73, stdev=1879756.90 00:24:24.174 clat percentiles (usec): 00:24:24.174 | 1.00th=[ 553], 5.00th=[ 594], 10.00th=[ 652], 00:24:24.174 | 20.00th=[ 41681], 30.00th=[ 41681], 40.00th=[ 42206], 00:24:24.174 | 50.00th=[ 42206], 60.00th=[ 42206], 70.00th=[ 42206], 00:24:24.174 | 80.00th=[ 42206], 90.00th=[ 42730], 95.00th=[ 42730], 00:24:24.174 | 99.00th=[ 43254], 99.50th=[ 43254], 99.90th=[17112761], 00:24:24.174 | 99.95th=[17112761], 99.99th=[17112761] 00:24:24.174 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60025msec); 0 zone resets 00:24:24.174 slat (usec): min=9, max=34109, avg=88.17, stdev=1506.52 00:24:24.174 clat (usec): min=187, max=536, avg=360.55, stdev=72.26 00:24:24.174 lat (usec): min=197, max=34435, avg=448.72, stdev=1506.97 00:24:24.174 clat percentiles (usec): 00:24:24.174 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 245], 20.00th=[ 310], 00:24:24.174 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 367], 00:24:24.174 | 70.00th=[ 412], 80.00th=[ 437], 90.00th=[ 453], 95.00th=[ 469], 00:24:24.174 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 537], 00:24:24.174 | 99.99th=[ 537] 00:24:24.174 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:24:24.174 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:24.174 lat (usec) : 250=5.53%, 
500=44.57%, 750=7.61%, 1000=0.20% 00:24:24.174 lat (msec) : 2=0.59%, 50=41.40%, >=2000=0.10% 00:24:24.174 cpu : usr=0.02%, sys=0.05%, ctx=1019, majf=0, minf=1 00:24:24.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.174 issued rwts: total=500,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:24.174 00:24:24.174 Run status group 0 (all jobs): 00:24:24.174 READ: bw=33.3KiB/s (34.1kB/s), 33.3KiB/s-33.3KiB/s (34.1kB/s-34.1kB/s), io=2000KiB (2048kB), run=60025-60025msec 00:24:24.174 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60025-60025msec 00:24:24.174 00:24:24.174 Disk stats (read/write): 00:24:24.174 nvme0n1: ios=548/512, merge=0/0, ticks=18952/181, in_queue=19133, util=99.70% 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:24.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:24.174 21:20:00 -- common/autotest_common.sh@1198 -- # local i=0 00:24:24.174 21:20:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:24.174 21:20:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:24.174 21:20:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:24.174 21:20:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:24.174 21:20:00 -- common/autotest_common.sh@1210 -- # return 0 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:24.174 nvmf hotplug test: fio successful as expected 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.174 21:20:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:24.174 21:20:00 -- common/autotest_common.sh@10 -- # set +x 00:24:24.174 21:20:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:24.174 21:20:00 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:24.174 21:20:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:24.174 21:20:00 -- nvmf/common.sh@116 -- # sync 00:24:24.174 21:20:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:24.174 21:20:00 -- nvmf/common.sh@119 -- # set +e 00:24:24.174 21:20:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:24.174 21:20:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:24.174 rmmod nvme_tcp 00:24:24.174 rmmod nvme_fabrics 00:24:24.174 rmmod nvme_keyring 00:24:24.174 21:20:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:24.174 21:20:00 -- nvmf/common.sh@123 -- # set -e 00:24:24.174 21:20:00 -- nvmf/common.sh@124 -- # return 0 00:24:24.174 21:20:00 -- nvmf/common.sh@477 -- # '[' -n 2449692 ']' 00:24:24.174 21:20:00 -- nvmf/common.sh@478 -- # killprocess 2449692 00:24:24.174 21:20:00 -- common/autotest_common.sh@926 -- # '[' -z 2449692 ']' 00:24:24.175 21:20:00 -- 
common/autotest_common.sh@930 -- # kill -0 2449692 00:24:24.175 21:20:00 -- common/autotest_common.sh@931 -- # uname 00:24:24.175 21:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:24.175 21:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2449692 00:24:24.175 21:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:24.175 21:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:24.175 21:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2449692' 00:24:24.175 killing process with pid 2449692 00:24:24.175 21:20:00 -- common/autotest_common.sh@945 -- # kill 2449692 00:24:24.175 21:20:00 -- common/autotest_common.sh@950 -- # wait 2449692 00:24:24.175 21:20:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:24.175 21:20:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:24.175 21:20:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:24.175 21:20:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.175 21:20:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:24.175 21:20:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.175 21:20:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.175 21:20:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.746 21:20:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:24.746 00:24:24.746 real 1m14.669s 00:24:24.746 user 4m37.186s 00:24:24.746 sys 0m6.413s 00:24:24.746 21:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.746 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:24:24.746 ************************************ 00:24:24.746 END TEST nvmf_initiator_timeout 00:24:24.746 ************************************ 00:24:24.746 21:20:02 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:24.746 21:20:02 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:24.746 21:20:02 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:24.746 21:20:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:24.746 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:24:31.423 21:20:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:31.423 21:20:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:31.423 21:20:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:31.423 21:20:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:31.423 21:20:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:31.423 21:20:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:31.423 21:20:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:31.423 21:20:09 -- nvmf/common.sh@294 -- # net_devs=() 00:24:31.423 21:20:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:31.423 21:20:09 -- nvmf/common.sh@295 -- # e810=() 00:24:31.423 21:20:09 -- nvmf/common.sh@295 -- # local -ga e810 00:24:31.423 21:20:09 -- nvmf/common.sh@296 -- # x722=() 00:24:31.423 21:20:09 -- nvmf/common.sh@296 -- # local -ga x722 00:24:31.423 21:20:09 -- nvmf/common.sh@297 -- # mlx=() 00:24:31.423 21:20:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:31.423 21:20:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.423 21:20:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:31.423 21:20:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:31.423 21:20:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:31.423 21:20:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:31.423 21:20:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:31.423 21:20:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:31.424 21:20:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:31.424 21:20:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.424 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.424 21:20:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:31.424 21:20:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.424 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.424 21:20:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:31.424 21:20:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:31.424 21:20:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.424 21:20:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:31.424 21:20:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.424 21:20:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.424 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.424 21:20:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.424 21:20:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:31.424 21:20:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.424 21:20:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:31.424 21:20:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.424 21:20:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.424 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.424 21:20:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.424 21:20:09 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:31.424 21:20:09 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.424 21:20:09 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:31.424 21:20:09 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:31.424 21:20:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:31.424 21:20:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:31.424 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:24:31.424 ************************************ 00:24:31.424 START TEST nvmf_perf_adq 00:24:31.424 ************************************ 00:24:31.424 21:20:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:31.424 * Looking for test storage... 00:24:31.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.424 21:20:09 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.424 21:20:09 -- nvmf/common.sh@7 -- # uname -s 00:24:31.424 21:20:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.424 21:20:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.424 21:20:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.424 21:20:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.424 21:20:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.424 21:20:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.424 21:20:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.424 21:20:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.424 21:20:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.424 21:20:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.424 21:20:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.424 21:20:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.424 21:20:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.424 21:20:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.424 21:20:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.424 21:20:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.424 21:20:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.424 21:20:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.424 21:20:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.424 21:20:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.424 21:20:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.424 21:20:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.424 21:20:09 -- paths/export.sh@5 -- # export PATH 00:24:31.424 21:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.424 21:20:09 -- nvmf/common.sh@46 -- # : 0 00:24:31.424 21:20:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:31.424 21:20:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:31.424 21:20:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:31.424 21:20:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.424 21:20:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.424 21:20:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:31.424 21:20:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:31.424 21:20:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:31.424 21:20:09 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:31.424 21:20:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:31.424 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:24:38.014 21:20:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:38.014 21:20:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:38.014 21:20:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:38.014 21:20:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:38.014 21:20:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:38.014 21:20:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:38.014 21:20:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:38.014 21:20:16 -- nvmf/common.sh@294 -- # net_devs=() 00:24:38.014 21:20:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:38.014 21:20:16 -- nvmf/common.sh@295 -- # e810=() 00:24:38.014 21:20:16 -- nvmf/common.sh@295 -- # local -ga e810 00:24:38.014 21:20:16 -- nvmf/common.sh@296 -- # x722=() 00:24:38.014 21:20:16 -- nvmf/common.sh@296 -- # local -ga x722 00:24:38.014 21:20:16 -- nvmf/common.sh@297 -- # mlx=() 00:24:38.014 21:20:16 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:38.014 21:20:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.014 21:20:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:38.014 21:20:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:38.014 21:20:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:38.014 21:20:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:38.014 21:20:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.014 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.014 21:20:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:38.014 21:20:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.014 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.014 21:20:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:38.014 21:20:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:38.014 21:20:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:38.014 21:20:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.014 21:20:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:38.014 21:20:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.014 21:20:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.014 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.014 21:20:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.014 21:20:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:38.014 21:20:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
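For reference, gather_supported_nvmf_pci_devs (running again here at the start of perf_adq.sh) matches known E810/X722/Mellanox device IDs and then resolves each PCI function to its kernel netdev through the /sys/bus/pci/devices/$pci/net/ glob echoed just above; the "Found net devices under 0000:4b:00.x" lines are the result of that lookup. The same check can be done by hand, using the bus address from this log:

    pci=0000:4b:00.0
    ls /sys/bus/pci/devices/$pci/net/      # prints the bound netdev name, cvl_0_0 on this host
    lspci -nn -s $pci                      # shows the vendor:device pair, 8086:159b for E810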
00:24:38.014 21:20:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:38.014 21:20:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.014 21:20:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.014 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.014 21:20:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.014 21:20:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:38.014 21:20:16 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.014 21:20:16 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:38.014 21:20:16 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:38.014 21:20:16 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:38.014 21:20:16 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:39.928 21:20:17 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:41.839 21:20:19 -- target/perf_adq.sh@54 -- # sleep 5 00:24:47.124 21:20:24 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:47.124 21:20:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:47.124 21:20:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.124 21:20:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:47.124 21:20:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:47.124 21:20:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:47.124 21:20:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.124 21:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.124 21:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.124 21:20:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:47.124 21:20:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:47.124 21:20:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:47.124 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:47.124 21:20:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:47.124 21:20:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:47.124 21:20:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:47.124 21:20:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:47.124 21:20:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:47.124 21:20:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:47.124 21:20:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:47.124 21:20:24 -- nvmf/common.sh@294 -- # net_devs=() 00:24:47.124 21:20:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:47.124 21:20:24 -- nvmf/common.sh@295 -- # e810=() 00:24:47.124 21:20:24 -- nvmf/common.sh@295 -- # local -ga e810 00:24:47.124 21:20:24 -- nvmf/common.sh@296 -- # x722=() 00:24:47.124 21:20:24 -- nvmf/common.sh@296 -- # local -ga x722 00:24:47.124 21:20:24 -- nvmf/common.sh@297 -- # mlx=() 00:24:47.124 21:20:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:47.124 21:20:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.124 21:20:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.125 21:20:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.125 21:20:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:47.125 21:20:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:47.125 21:20:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:47.125 21:20:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:47.125 21:20:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:47.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:47.125 21:20:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:47.125 21:20:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:47.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:47.125 21:20:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:47.125 21:20:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:47.125 21:20:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.125 21:20:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:47.125 21:20:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.125 21:20:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:47.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:47.125 21:20:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.125 21:20:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:47.125 21:20:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.125 21:20:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:47.125 21:20:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.125 21:20:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:47.125 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:47.125 21:20:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.125 21:20:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:47.125 21:20:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:47.125 21:20:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:47.125 21:20:24 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:47.125 21:20:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.125 21:20:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.125 21:20:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.125 21:20:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:47.125 21:20:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.125 21:20:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.125 21:20:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:47.125 21:20:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.125 21:20:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.125 21:20:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:47.125 21:20:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:47.125 21:20:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.125 21:20:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.125 21:20:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.125 21:20:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.125 21:20:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:47.125 21:20:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.125 21:20:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.125 21:20:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.125 21:20:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:47.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:24:47.125 00:24:47.125 --- 10.0.0.2 ping statistics --- 00:24:47.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.125 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:24:47.125 21:20:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:24:47.125 00:24:47.125 --- 10.0.0.1 ping statistics --- 00:24:47.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.125 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:24:47.125 21:20:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.125 21:20:24 -- nvmf/common.sh@410 -- # return 0 00:24:47.125 21:20:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:47.125 21:20:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.125 21:20:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:47.125 21:20:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.125 21:20:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:47.125 21:20:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:47.125 21:20:24 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:47.125 21:20:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:47.125 21:20:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:47.125 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:47.125 21:20:24 -- nvmf/common.sh@469 -- # nvmfpid=2472069 00:24:47.125 21:20:24 -- nvmf/common.sh@470 -- # waitforlisten 2472069 00:24:47.125 21:20:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:47.125 21:20:24 -- common/autotest_common.sh@819 -- # '[' -z 2472069 ']' 00:24:47.125 21:20:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.125 21:20:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:47.125 21:20:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.125 21:20:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:47.125 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:24:47.125 [2024-06-08 21:20:25.003957] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:47.125 [2024-06-08 21:20:25.004023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.125 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.125 [2024-06-08 21:20:25.074227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.125 [2024-06-08 21:20:25.146725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:47.125 [2024-06-08 21:20:25.146856] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.125 [2024-06-08 21:20:25.146867] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.125 [2024-06-08 21:20:25.146875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
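Unlike the earlier initiator_timeout run, perf_adq.sh starts nvmf_tgt with --wait-for-rpc so the socket layer can be tuned before the SPDK framework initializes; the entries that follow show adq_configure_nvmf_target enabling placement IDs and zero-copy sends on the posix sock implementation, then framework_start_init and a TCP transport created with --sock-priority 0. A condensed sketch of that startup order (paths assume an SPDK build tree, and the RPCs are issued against the target inside the namespace as above):

    ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &       # target parks until RPCs arrive
    ./scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 0 --enable-zerocopy-send-server
    ./scripts/rpc.py framework_start_init              # finish subsystem initialization
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0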
00:24:47.125 [2024-06-08 21:20:25.147017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.125 [2024-06-08 21:20:25.147117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.125 [2024-06-08 21:20:25.147277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.125 [2024-06-08 21:20:25.147278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.695 21:20:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:47.695 21:20:25 -- common/autotest_common.sh@852 -- # return 0 00:24:47.695 21:20:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:47.695 21:20:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:47.695 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 21:20:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.957 21:20:25 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:47.957 21:20:25 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 21:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 21:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 [2024-06-08 21:20:25.925335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.957 21:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 Malloc1 00:24:47.957 21:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 21:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 21:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.957 21:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.957 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.957 [2024-06-08 21:20:25.984646] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.957 21:20:25 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.957 21:20:25 -- target/perf_adq.sh@73 -- # perfpid=2472389 00:24:47.957 21:20:25 -- target/perf_adq.sh@74 -- # sleep 2 00:24:47.957 21:20:25 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:47.957 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.501 21:20:27 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:50.501 21:20:27 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:50.501 21:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.501 21:20:27 -- target/perf_adq.sh@76 -- # wc -l 00:24:50.501 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:24:50.501 21:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.501 21:20:28 -- target/perf_adq.sh@76 -- # count=4 00:24:50.501 21:20:28 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:50.501 21:20:28 -- target/perf_adq.sh@81 -- # wait 2472389 00:24:58.637 Initializing NVMe Controllers 00:24:58.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:58.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:58.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:58.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:58.637 Initialization complete. Launching workers. 00:24:58.637 ======================================================== 00:24:58.637 Latency(us) 00:24:58.637 Device Information : IOPS MiB/s Average min max 00:24:58.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14055.62 54.90 4553.61 1583.45 8978.27 00:24:58.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13000.03 50.78 4938.80 1671.57 45696.34 00:24:58.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 16328.31 63.78 3919.67 840.77 49952.99 00:24:58.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11566.24 45.18 5543.77 1533.48 44957.17 00:24:58.637 ======================================================== 00:24:58.637 Total : 54950.20 214.65 4664.78 840.77 49952.99 00:24:58.637 00:24:58.637 21:20:36 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:58.637 21:20:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:58.637 21:20:36 -- nvmf/common.sh@116 -- # sync 00:24:58.637 21:20:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:58.637 21:20:36 -- nvmf/common.sh@119 -- # set +e 00:24:58.637 21:20:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:58.637 21:20:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:58.637 rmmod nvme_tcp 00:24:58.637 rmmod nvme_fabrics 00:24:58.637 rmmod nvme_keyring 00:24:58.637 21:20:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:58.637 21:20:36 -- nvmf/common.sh@123 -- # set -e 00:24:58.637 21:20:36 -- nvmf/common.sh@124 -- # return 0 00:24:58.637 21:20:36 -- nvmf/common.sh@477 -- # '[' -n 2472069 ']' 00:24:58.637 21:20:36 -- nvmf/common.sh@478 -- # killprocess 2472069 00:24:58.637 21:20:36 -- common/autotest_common.sh@926 -- # '[' -z 2472069 ']' 00:24:58.637 21:20:36 -- common/autotest_common.sh@930 -- 
# kill -0 2472069 00:24:58.637 21:20:36 -- common/autotest_common.sh@931 -- # uname 00:24:58.637 21:20:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:58.637 21:20:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2472069 00:24:58.637 21:20:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:58.637 21:20:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:58.637 21:20:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2472069' 00:24:58.637 killing process with pid 2472069 00:24:58.637 21:20:36 -- common/autotest_common.sh@945 -- # kill 2472069 00:24:58.637 21:20:36 -- common/autotest_common.sh@950 -- # wait 2472069 00:24:58.637 21:20:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:58.637 21:20:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:58.637 21:20:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:58.637 21:20:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.637 21:20:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:58.638 21:20:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.638 21:20:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.638 21:20:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.561 21:20:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:00.561 21:20:38 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:00.561 21:20:38 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:02.473 21:20:40 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:04.385 21:20:42 -- target/perf_adq.sh@54 -- # sleep 5 00:25:09.668 21:20:47 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:09.668 21:20:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:09.668 21:20:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.668 21:20:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:09.668 21:20:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:09.668 21:20:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:09.668 21:20:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.668 21:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.668 21:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.668 21:20:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:09.668 21:20:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:09.668 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:25:09.668 21:20:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:09.668 21:20:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:09.668 21:20:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:09.668 21:20:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:09.668 21:20:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:09.668 21:20:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:09.668 21:20:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:09.668 21:20:47 -- nvmf/common.sh@294 -- # net_devs=() 00:25:09.668 21:20:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:09.668 21:20:47 -- nvmf/common.sh@295 -- # e810=() 00:25:09.668 21:20:47 -- nvmf/common.sh@295 -- # local -ga e810 00:25:09.668 21:20:47 -- nvmf/common.sh@296 -- # x722=() 00:25:09.668 21:20:47 -- nvmf/common.sh@296 -- # local -ga x722 00:25:09.668 21:20:47 -- nvmf/common.sh@297 -- # mlx=() 00:25:09.668 21:20:47 
-- nvmf/common.sh@297 -- # local -ga mlx 00:25:09.668 21:20:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.668 21:20:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:09.668 21:20:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:09.668 21:20:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:09.668 21:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:09.668 21:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:09.668 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:09.668 21:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:09.668 21:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:09.668 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:09.668 21:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:09.668 21:20:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:09.668 21:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:09.668 21:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.668 21:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:09.668 21:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.668 21:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:09.669 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:09.669 21:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.669 21:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:09.669 21:20:47 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.669 21:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:09.669 21:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.669 21:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:09.669 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:09.669 21:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.669 21:20:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:09.669 21:20:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:09.669 21:20:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:09.669 21:20:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:09.669 21:20:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:09.669 21:20:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.669 21:20:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.669 21:20:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.669 21:20:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:09.669 21:20:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.669 21:20:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.669 21:20:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:09.669 21:20:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.669 21:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.669 21:20:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:09.669 21:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:09.669 21:20:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.669 21:20:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.669 21:20:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.669 21:20:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.669 21:20:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:09.669 21:20:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.669 21:20:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.669 21:20:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.669 21:20:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:09.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:25:09.669 00:25:09.669 --- 10.0.0.2 ping statistics --- 00:25:09.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.669 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:25:09.669 21:20:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:09.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:25:09.669 00:25:09.669 --- 10.0.0.1 ping statistics --- 00:25:09.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.669 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:25:09.669 21:20:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.669 21:20:47 -- nvmf/common.sh@410 -- # return 0 00:25:09.669 21:20:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:09.669 21:20:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.669 21:20:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:09.669 21:20:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:09.669 21:20:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.669 21:20:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:09.669 21:20:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:09.669 21:20:47 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:09.669 21:20:47 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:09.669 21:20:47 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:09.669 21:20:47 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:09.669 net.core.busy_poll = 1 00:25:09.669 21:20:47 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:09.669 net.core.busy_read = 1 00:25:09.669 21:20:47 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:09.669 21:20:47 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:09.669 21:20:47 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:09.669 21:20:47 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:09.669 21:20:47 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:09.669 21:20:47 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:09.669 21:20:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:09.669 21:20:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:09.669 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:25:09.669 21:20:47 -- nvmf/common.sh@469 -- # nvmfpid=2476947 00:25:09.669 21:20:47 -- nvmf/common.sh@470 -- # waitforlisten 2476947 00:25:09.669 21:20:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:09.669 21:20:47 -- common/autotest_common.sh@819 -- # '[' -z 2476947 ']' 00:25:09.669 21:20:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.669 21:20:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:09.669 21:20:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
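The trace above brings up the two-port loopback these tests run on (the target-side E810 port is moved into its own network namespace so initiator and target traffic actually crosses the link) and then applies the ADQ configuration before launching nvmf_tgt. A standalone sketch of that sequence, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing reported in the trace:

    #!/usr/bin/env bash
    # Two-port loopback: the target port lives in its own netns.
    set -e
    TGT_IF=cvl_0_0        # target-side port (names as reported in the trace)
    INI_IF=cvl_0_1        # initiator-side port
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

    # ADQ: enable hardware TC offload and busy polling, then steer NVMe/TCP
    # (TCP dport 4420) into its own 2-queue hardware traffic class.
    ip netns exec "$NS" ethtool --offload "$TGT_IF" hw-tc-offload on
    ip netns exec "$NS" ethtool --set-priv-flags "$TGT_IF" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec "$NS" tc qdisc add dev "$TGT_IF" root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel
    ip netns exec "$NS" tc qdisc add dev "$TGT_IF" ingress
    ip netns exec "$NS" tc filter add dev "$TGT_IF" protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With this in place, every NVMe/TCP connection to 10.0.0.2:4420 lands on the two queues of traffic class 1, which is what the poll-group check after the perf run below relies on.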
00:25:09.669 21:20:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:09.669 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:25:09.669 [2024-06-08 21:20:47.735324] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:09.669 [2024-06-08 21:20:47.735376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.930 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.930 [2024-06-08 21:20:47.801024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:09.930 [2024-06-08 21:20:47.863418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:09.930 [2024-06-08 21:20:47.863556] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.930 [2024-06-08 21:20:47.863566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.930 [2024-06-08 21:20:47.863574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.930 [2024-06-08 21:20:47.863715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.930 [2024-06-08 21:20:47.863852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.930 [2024-06-08 21:20:47.864006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.930 [2024-06-08 21:20:47.864007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.502 21:20:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:10.502 21:20:48 -- common/autotest_common.sh@852 -- # return 0 00:25:10.502 21:20:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:10.502 21:20:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:10.502 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.502 21:20:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.502 21:20:48 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:10.502 21:20:48 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:10.502 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.502 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.502 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.502 21:20:48 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:10.502 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.502 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.763 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.763 21:20:48 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:10.763 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.763 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.763 [2024-06-08 21:20:48.644368] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.763 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.763 21:20:48 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:10.763 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.763 21:20:48 -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.763 Malloc1 00:25:10.763 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.763 21:20:48 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.763 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.763 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.763 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.763 21:20:48 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:10.763 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.763 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.763 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.763 21:20:48 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.763 21:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.763 21:20:48 -- common/autotest_common.sh@10 -- # set +x 00:25:10.763 [2024-06-08 21:20:48.696903] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.763 21:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.763 21:20:48 -- target/perf_adq.sh@94 -- # perfpid=2477231 00:25:10.763 21:20:48 -- target/perf_adq.sh@95 -- # sleep 2 00:25:10.763 21:20:48 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:10.763 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.674 21:20:50 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:12.674 21:20:50 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:12.674 21:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.674 21:20:50 -- target/perf_adq.sh@97 -- # wc -l 00:25:12.674 21:20:50 -- common/autotest_common.sh@10 -- # set +x 00:25:12.674 21:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.674 21:20:50 -- target/perf_adq.sh@97 -- # count=2 00:25:12.674 21:20:50 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:12.674 21:20:50 -- target/perf_adq.sh@103 -- # wait 2477231 00:25:20.811 Initializing NVMe Controllers 00:25:20.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:20.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:20.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:20.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:20.811 Initialization complete. Launching workers. 
00:25:20.811 ======================================================== 00:25:20.811 Latency(us) 00:25:20.811 Device Information : IOPS MiB/s Average min max 00:25:20.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11251.50 43.95 5688.76 1422.92 48724.89 00:25:20.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10622.30 41.49 6025.06 1434.48 48541.85 00:25:20.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10435.10 40.76 6133.64 1354.58 52741.16 00:25:20.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10349.60 40.43 6203.15 1254.54 50130.75 00:25:20.812 ======================================================== 00:25:20.812 Total : 42658.49 166.63 6006.13 1254.54 52741.16 00:25:20.812 00:25:20.812 21:20:58 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:20.812 21:20:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:20.812 21:20:58 -- nvmf/common.sh@116 -- # sync 00:25:20.812 21:20:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:20.812 21:20:58 -- nvmf/common.sh@119 -- # set +e 00:25:20.812 21:20:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:20.812 21:20:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:21.072 rmmod nvme_tcp 00:25:21.072 rmmod nvme_fabrics 00:25:21.072 rmmod nvme_keyring 00:25:21.072 21:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:21.072 21:20:58 -- nvmf/common.sh@123 -- # set -e 00:25:21.072 21:20:58 -- nvmf/common.sh@124 -- # return 0 00:25:21.072 21:20:58 -- nvmf/common.sh@477 -- # '[' -n 2476947 ']' 00:25:21.072 21:20:58 -- nvmf/common.sh@478 -- # killprocess 2476947 00:25:21.072 21:20:58 -- common/autotest_common.sh@926 -- # '[' -z 2476947 ']' 00:25:21.072 21:20:58 -- common/autotest_common.sh@930 -- # kill -0 2476947 00:25:21.072 21:20:58 -- common/autotest_common.sh@931 -- # uname 00:25:21.072 21:20:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:21.072 21:20:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2476947 00:25:21.072 21:20:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:21.072 21:20:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:21.073 21:20:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2476947' 00:25:21.073 killing process with pid 2476947 00:25:21.073 21:20:59 -- common/autotest_common.sh@945 -- # kill 2476947 00:25:21.073 21:20:59 -- common/autotest_common.sh@950 -- # wait 2476947 00:25:21.333 21:20:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:21.333 21:20:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:21.333 21:20:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:21.333 21:20:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.333 21:20:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:21.333 21:20:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.333 21:20:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.333 21:20:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.635 21:21:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:24.635 21:21:02 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:24.635 00:25:24.635 real 0m53.077s 00:25:24.635 user 2m46.944s 00:25:24.635 sys 0m11.721s 00:25:24.635 21:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.635 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:25:24.635 
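Two ADQ-specific details are visible in the target bring-up and teardown above: nvmf_tgt is started with --wait-for-rpc so the posix sock implementation can be given placement-id/zero-copy options before the framework initializes, and the pass criterion after the perf run is that at least 2 of the 4 poll groups never received an I/O qpair (i.e. the connections stayed on the ADQ queues). A rough equivalent using direct scripts/rpc.py calls instead of the harness's persistent rpc_cmd session:

    # Configure the posix sock layer before framework init (hence --wait-for-rpc),
    # then create the TCP transport with a non-default socket priority.
    RPC="scripts/rpc.py"     # from the SPDK tree; default RPC socket /var/tmp/spdk.sock
    $RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

    # After the workload: count poll groups that never saw an I/O qpair.
    # With 4 reactors and 2 ADQ queues, at least 2 should be idle.
    idle=$($RPC nvmf_get_stats |
           jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' |
           wc -l)
    [ "$idle" -ge 2 ] || echo "ADQ steering ineffective: only $idle idle poll groups"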
************************************ 00:25:24.635 END TEST nvmf_perf_adq 00:25:24.635 ************************************ 00:25:24.635 21:21:02 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:24.635 21:21:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:24.635 21:21:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.635 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:25:24.635 ************************************ 00:25:24.635 START TEST nvmf_shutdown 00:25:24.635 ************************************ 00:25:24.635 21:21:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:24.635 * Looking for test storage... 00:25:24.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:24.635 21:21:02 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.635 21:21:02 -- nvmf/common.sh@7 -- # uname -s 00:25:24.635 21:21:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.635 21:21:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.635 21:21:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.635 21:21:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.635 21:21:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.635 21:21:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.635 21:21:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.635 21:21:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.635 21:21:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.635 21:21:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.635 21:21:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.635 21:21:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.635 21:21:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.635 21:21:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.635 21:21:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.635 21:21:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.635 21:21:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.635 21:21:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.635 21:21:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.635 21:21:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.635 21:21:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.636 21:21:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.636 21:21:02 -- paths/export.sh@5 -- # export PATH 00:25:24.636 21:21:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.636 21:21:02 -- nvmf/common.sh@46 -- # : 0 00:25:24.636 21:21:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:24.636 21:21:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:24.636 21:21:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:24.636 21:21:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.636 21:21:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.636 21:21:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:24.636 21:21:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:24.636 21:21:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:24.636 21:21:02 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:24.636 21:21:02 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.636 21:21:02 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:24.636 21:21:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:24.636 21:21:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.636 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:25:24.636 ************************************ 00:25:24.636 START TEST nvmf_shutdown_tc1 00:25:24.636 ************************************ 00:25:24.636 21:21:02 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:24.636 21:21:02 -- target/shutdown.sh@74 -- # starttarget 00:25:24.636 21:21:02 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:24.636 21:21:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:24.636 21:21:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.636 21:21:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:24.636 21:21:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:24.636 21:21:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:24.636 
21:21:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.636 21:21:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.636 21:21:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.636 21:21:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:24.636 21:21:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:24.636 21:21:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:24.636 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:25:31.283 21:21:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:31.283 21:21:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:31.283 21:21:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:31.283 21:21:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:31.283 21:21:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:31.283 21:21:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:31.283 21:21:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:31.283 21:21:09 -- nvmf/common.sh@294 -- # net_devs=() 00:25:31.283 21:21:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:31.283 21:21:09 -- nvmf/common.sh@295 -- # e810=() 00:25:31.283 21:21:09 -- nvmf/common.sh@295 -- # local -ga e810 00:25:31.283 21:21:09 -- nvmf/common.sh@296 -- # x722=() 00:25:31.283 21:21:09 -- nvmf/common.sh@296 -- # local -ga x722 00:25:31.283 21:21:09 -- nvmf/common.sh@297 -- # mlx=() 00:25:31.283 21:21:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:31.283 21:21:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.283 21:21:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:31.283 21:21:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:31.283 21:21:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:31.283 21:21:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.283 21:21:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:31.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:31.283 21:21:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:25:31.283 21:21:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:31.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:31.283 21:21:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:31.283 21:21:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:31.283 21:21:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.283 21:21:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.283 21:21:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.283 21:21:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.283 21:21:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:31.284 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:31.284 21:21:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.284 21:21:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.284 21:21:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.284 21:21:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.284 21:21:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.284 21:21:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:31.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:31.284 21:21:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.284 21:21:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:31.284 21:21:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:31.284 21:21:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:31.284 21:21:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:31.284 21:21:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:31.284 21:21:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.284 21:21:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.284 21:21:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.284 21:21:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:31.284 21:21:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.284 21:21:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.284 21:21:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:31.284 21:21:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.284 21:21:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.284 21:21:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:31.284 21:21:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:31.284 21:21:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.284 21:21:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.546 21:21:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.546 21:21:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.546 21:21:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:31.546 21:21:09 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.546 21:21:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.546 21:21:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.546 21:21:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:31.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:25:31.546 00:25:31.546 --- 10.0.0.2 ping statistics --- 00:25:31.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.546 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:25:31.546 21:21:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:31.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:25:31.546 00:25:31.546 --- 10.0.0.1 ping statistics --- 00:25:31.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.546 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:25:31.546 21:21:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.546 21:21:09 -- nvmf/common.sh@410 -- # return 0 00:25:31.546 21:21:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:31.546 21:21:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.546 21:21:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:31.546 21:21:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:31.546 21:21:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.546 21:21:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:31.546 21:21:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:31.808 21:21:09 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:31.808 21:21:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:31.808 21:21:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.808 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:25:31.808 21:21:09 -- nvmf/common.sh@469 -- # nvmfpid=2483924 00:25:31.808 21:21:09 -- nvmf/common.sh@470 -- # waitforlisten 2483924 00:25:31.808 21:21:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:31.808 21:21:09 -- common/autotest_common.sh@819 -- # '[' -z 2483924 ']' 00:25:31.808 21:21:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.808 21:21:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:31.808 21:21:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.808 21:21:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:31.808 21:21:09 -- common/autotest_common.sh@10 -- # set +x 00:25:31.808 [2024-06-08 21:21:09.737100] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:31.808 [2024-06-08 21:21:09.737169] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.808 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.808 [2024-06-08 21:21:09.827193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.069 [2024-06-08 21:21:09.919780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.069 [2024-06-08 21:21:09.919951] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.069 [2024-06-08 21:21:09.919961] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.069 [2024-06-08 21:21:09.919968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.069 [2024-06-08 21:21:09.920102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.069 [2024-06-08 21:21:09.920269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.069 [2024-06-08 21:21:09.920451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:32.069 [2024-06-08 21:21:09.920520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.643 21:21:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:32.643 21:21:10 -- common/autotest_common.sh@852 -- # return 0 00:25:32.643 21:21:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:32.643 21:21:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:32.643 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:25:32.643 21:21:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.643 21:21:10 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.643 21:21:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.643 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:25:32.643 [2024-06-08 21:21:10.558377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.643 21:21:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.643 21:21:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:32.643 21:21:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:32.643 21:21:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:32.643 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:25:32.643 21:21:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- 
target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:32.643 21:21:10 -- target/shutdown.sh@28 -- # cat 00:25:32.643 21:21:10 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:32.643 21:21:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.643 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:25:32.643 Malloc1 00:25:32.643 [2024-06-08 21:21:10.661571] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.643 Malloc2 00:25:32.643 Malloc3 00:25:32.904 Malloc4 00:25:32.904 Malloc5 00:25:32.904 Malloc6 00:25:32.904 Malloc7 00:25:32.904 Malloc8 00:25:32.904 Malloc9 00:25:33.166 Malloc10 00:25:33.166 21:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.166 21:21:11 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:33.166 21:21:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.166 21:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:33.166 21:21:11 -- target/shutdown.sh@78 -- # perfpid=2484390 00:25:33.166 21:21:11 -- target/shutdown.sh@79 -- # waitforlisten 2484390 /var/tmp/bdevperf.sock 00:25:33.166 21:21:11 -- common/autotest_common.sh@819 -- # '[' -z 2484390 ']' 00:25:33.166 21:21:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.166 21:21:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:33.166 21:21:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
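The repeated for/cat lines above build rpcs.txt with one block of RPCs per subsystem and replay it, leaving Malloc1 through Malloc10 each exported through its own subsystem listening on 10.0.0.2:4420. Issuing the same RPCs one at a time would look like the sketch below (the serial numbers are illustrative; the batch-file contents themselves are not shown in the trace):

    RPC="scripts/rpc.py"
    for i in $(seq 1 10); do
        $RPC bdev_malloc_create 64 512 -b "Malloc$i"    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done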
00:25:33.166 21:21:11 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:33.166 21:21:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:33.166 21:21:11 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:33.166 21:21:11 -- common/autotest_common.sh@10 -- # set +x 00:25:33.166 21:21:11 -- nvmf/common.sh@520 -- # config=() 00:25:33.166 21:21:11 -- nvmf/common.sh@520 -- # local subsystem config 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 21:21:11 -- 
nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 [2024-06-08 21:21:11.106344] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:33.166 [2024-06-08 21:21:11.106399] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.166 EOF 00:25:33.166 )") 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.166 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.166 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.166 { 00:25:33.166 "params": { 00:25:33.166 "name": "Nvme$subsystem", 00:25:33.166 "trtype": "$TEST_TRANSPORT", 00:25:33.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.166 "adrfam": "ipv4", 00:25:33.166 "trsvcid": "$NVMF_PORT", 00:25:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.166 "hdgst": ${hdgst:-false}, 00:25:33.166 "ddgst": ${ddgst:-false} 00:25:33.166 }, 00:25:33.166 "method": "bdev_nvme_attach_controller" 00:25:33.166 } 00:25:33.167 EOF 00:25:33.167 )") 00:25:33.167 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.167 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.167 21:21:11 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.167 { 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme$subsystem", 00:25:33.167 "trtype": "$TEST_TRANSPORT", 00:25:33.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "$NVMF_PORT", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.167 "hdgst": ${hdgst:-false}, 00:25:33.167 "ddgst": ${ddgst:-false} 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 } 00:25:33.167 EOF 00:25:33.167 )") 00:25:33.167 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.167 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.167 21:21:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.167 21:21:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.167 { 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme$subsystem", 00:25:33.167 "trtype": "$TEST_TRANSPORT", 00:25:33.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "$NVMF_PORT", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.167 "hdgst": ${hdgst:-false}, 00:25:33.167 "ddgst": ${ddgst:-false} 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 } 00:25:33.167 EOF 00:25:33.167 )") 00:25:33.167 21:21:11 -- nvmf/common.sh@542 -- # cat 00:25:33.167 21:21:11 -- nvmf/common.sh@544 -- # jq . 00:25:33.167 21:21:11 -- nvmf/common.sh@545 -- # IFS=, 00:25:33.167 21:21:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme1", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme2", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme3", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme4", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme5", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 
"trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme6", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme7", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme8", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme9", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 },{ 00:25:33.167 "params": { 00:25:33.167 "name": "Nvme10", 00:25:33.167 "trtype": "tcp", 00:25:33.167 "traddr": "10.0.0.2", 00:25:33.167 "adrfam": "ipv4", 00:25:33.167 "trsvcid": "4420", 00:25:33.167 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:33.167 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:33.167 "hdgst": false, 00:25:33.167 "ddgst": false 00:25:33.167 }, 00:25:33.167 "method": "bdev_nvme_attach_controller" 00:25:33.167 }' 00:25:33.167 [2024-06-08 21:21:11.166541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.167 [2024-06-08 21:21:11.229275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.553 21:21:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:34.553 21:21:12 -- common/autotest_common.sh@852 -- # return 0 00:25:34.554 21:21:12 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:34.554 21:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.554 21:21:12 -- common/autotest_common.sh@10 -- # set +x 00:25:34.554 21:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.554 21:21:12 -- target/shutdown.sh@83 -- # kill -9 2484390 00:25:34.554 21:21:12 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:34.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2484390 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:34.554 21:21:12 -- target/shutdown.sh@87 -- # sleep 1 
00:25:35.496 21:21:13 -- target/shutdown.sh@88 -- # kill -0 2483924 00:25:35.497 21:21:13 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:35.497 21:21:13 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:35.497 21:21:13 -- nvmf/common.sh@520 -- # config=() 00:25:35.497 21:21:13 -- nvmf/common.sh@520 -- # local subsystem config 00:25:35.497 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.497 { 00:25:35.497 "params": { 00:25:35.497 "name": "Nvme$subsystem", 00:25:35.497 "trtype": "$TEST_TRANSPORT", 00:25:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.497 "adrfam": "ipv4", 00:25:35.497 "trsvcid": "$NVMF_PORT", 00:25:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.497 "hdgst": ${hdgst:-false}, 00:25:35.497 "ddgst": ${ddgst:-false} 00:25:35.497 }, 00:25:35.497 "method": "bdev_nvme_attach_controller" 00:25:35.497 } 00:25:35.497 EOF 00:25:35.497 )") 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.497 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.497 { 00:25:35.497 "params": { 00:25:35.497 "name": "Nvme$subsystem", 00:25:35.497 "trtype": "$TEST_TRANSPORT", 00:25:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.497 "adrfam": "ipv4", 00:25:35.497 "trsvcid": "$NVMF_PORT", 00:25:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.497 "hdgst": ${hdgst:-false}, 00:25:35.497 "ddgst": ${ddgst:-false} 00:25:35.497 }, 00:25:35.497 "method": "bdev_nvme_attach_controller" 00:25:35.497 } 00:25:35.497 EOF 00:25:35.497 )") 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.497 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.497 { 00:25:35.497 "params": { 00:25:35.497 "name": "Nvme$subsystem", 00:25:35.497 "trtype": "$TEST_TRANSPORT", 00:25:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.497 "adrfam": "ipv4", 00:25:35.497 "trsvcid": "$NVMF_PORT", 00:25:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.497 "hdgst": ${hdgst:-false}, 00:25:35.497 "ddgst": ${ddgst:-false} 00:25:35.497 }, 00:25:35.497 "method": "bdev_nvme_attach_controller" 00:25:35.497 } 00:25:35.497 EOF 00:25:35.497 )") 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.497 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.497 { 00:25:35.497 "params": { 00:25:35.497 "name": "Nvme$subsystem", 00:25:35.497 "trtype": "$TEST_TRANSPORT", 00:25:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.497 "adrfam": "ipv4", 00:25:35.497 "trsvcid": "$NVMF_PORT", 00:25:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.497 "hdgst": ${hdgst:-false}, 00:25:35.497 "ddgst": ${ddgst:-false} 00:25:35.497 }, 00:25:35.497 "method": "bdev_nvme_attach_controller" 00:25:35.497 } 00:25:35.497 EOF 00:25:35.497 )") 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.497 21:21:13 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.497 { 00:25:35.497 "params": { 00:25:35.497 "name": "Nvme$subsystem", 00:25:35.497 "trtype": "$TEST_TRANSPORT", 00:25:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.497 "adrfam": "ipv4", 00:25:35.497 "trsvcid": "$NVMF_PORT", 00:25:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.497 "hdgst": ${hdgst:-false}, 00:25:35.497 "ddgst": ${ddgst:-false} 00:25:35.497 }, 00:25:35.497 "method": "bdev_nvme_attach_controller" 00:25:35.497 } 00:25:35.497 EOF 00:25:35.497 )") 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.497 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.497 { 00:25:35.497 "params": { 00:25:35.497 "name": "Nvme$subsystem", 00:25:35.497 "trtype": "$TEST_TRANSPORT", 00:25:35.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.497 "adrfam": "ipv4", 00:25:35.497 "trsvcid": "$NVMF_PORT", 00:25:35.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.497 "hdgst": ${hdgst:-false}, 00:25:35.497 "ddgst": ${ddgst:-false} 00:25:35.497 }, 00:25:35.497 "method": "bdev_nvme_attach_controller" 00:25:35.497 } 00:25:35.497 EOF 00:25:35.497 )") 00:25:35.497 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.497 [2024-06-08 21:21:13.587000] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:35.497 [2024-06-08 21:21:13.587053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485081 ] 00:25:35.758 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.758 { 00:25:35.758 "params": { 00:25:35.758 "name": "Nvme$subsystem", 00:25:35.758 "trtype": "$TEST_TRANSPORT", 00:25:35.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.758 "adrfam": "ipv4", 00:25:35.758 "trsvcid": "$NVMF_PORT", 00:25:35.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.758 "hdgst": ${hdgst:-false}, 00:25:35.758 "ddgst": ${ddgst:-false} 00:25:35.758 }, 00:25:35.758 "method": "bdev_nvme_attach_controller" 00:25:35.758 } 00:25:35.758 EOF 00:25:35.758 )") 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.758 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.758 { 00:25:35.758 "params": { 00:25:35.758 "name": "Nvme$subsystem", 00:25:35.758 "trtype": "$TEST_TRANSPORT", 00:25:35.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.758 "adrfam": "ipv4", 00:25:35.758 "trsvcid": "$NVMF_PORT", 00:25:35.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.758 "hdgst": ${hdgst:-false}, 00:25:35.758 "ddgst": ${ddgst:-false} 00:25:35.758 }, 00:25:35.758 "method": "bdev_nvme_attach_controller" 00:25:35.758 } 00:25:35.758 EOF 00:25:35.758 )") 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.758 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:25:35.758 { 00:25:35.758 "params": { 00:25:35.758 "name": "Nvme$subsystem", 00:25:35.758 "trtype": "$TEST_TRANSPORT", 00:25:35.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.758 "adrfam": "ipv4", 00:25:35.758 "trsvcid": "$NVMF_PORT", 00:25:35.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.758 "hdgst": ${hdgst:-false}, 00:25:35.758 "ddgst": ${ddgst:-false} 00:25:35.758 }, 00:25:35.758 "method": "bdev_nvme_attach_controller" 00:25:35.758 } 00:25:35.758 EOF 00:25:35.758 )") 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.758 21:21:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.758 { 00:25:35.758 "params": { 00:25:35.758 "name": "Nvme$subsystem", 00:25:35.758 "trtype": "$TEST_TRANSPORT", 00:25:35.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.758 "adrfam": "ipv4", 00:25:35.758 "trsvcid": "$NVMF_PORT", 00:25:35.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.758 "hdgst": ${hdgst:-false}, 00:25:35.758 "ddgst": ${ddgst:-false} 00:25:35.758 }, 00:25:35.758 "method": "bdev_nvme_attach_controller" 00:25:35.758 } 00:25:35.758 EOF 00:25:35.758 )") 00:25:35.758 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.758 21:21:13 -- nvmf/common.sh@542 -- # cat 00:25:35.758 21:21:13 -- nvmf/common.sh@544 -- # jq . 00:25:35.758 21:21:13 -- nvmf/common.sh@545 -- # IFS=, 00:25:35.758 21:21:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:35.758 "params": { 00:25:35.759 "name": "Nvme1", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme2", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme3", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme4", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme5", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme6", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme7", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme8", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme9", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 },{ 00:25:35.759 "params": { 00:25:35.759 "name": "Nvme10", 00:25:35.759 "trtype": "tcp", 00:25:35.759 "traddr": "10.0.0.2", 00:25:35.759 "adrfam": "ipv4", 00:25:35.759 "trsvcid": "4420", 00:25:35.759 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:35.759 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:35.759 "hdgst": false, 00:25:35.759 "ddgst": false 00:25:35.759 }, 00:25:35.759 "method": "bdev_nvme_attach_controller" 00:25:35.759 }' 00:25:35.759 [2024-06-08 21:21:13.647431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.759 [2024-06-08 21:21:13.710233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.143 Running I/O for 1 seconds... 
00:25:38.528 00:25:38.528 Latency(us) 00:25:38.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.528 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme1n1 : 1.12 355.17 22.20 0.00 0.00 166854.44 65972.91 170393.60 00:25:38.528 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme2n1 : 1.07 452.29 28.27 0.00 0.00 136491.30 12014.93 108789.76 00:25:38.528 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme3n1 : 1.08 446.20 27.89 0.00 0.00 137473.41 37137.07 110100.48 00:25:38.528 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme4n1 : 1.10 473.87 29.62 0.00 0.00 130248.56 11687.25 111411.20 00:25:38.528 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme5n1 : 1.10 363.62 22.73 0.00 0.00 168586.44 13817.17 174762.67 00:25:38.528 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme6n1 : 1.11 358.95 22.43 0.00 0.00 169132.28 16165.55 157286.40 00:25:38.528 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme7n1 : 1.11 474.15 29.63 0.00 0.00 127423.10 10704.21 108789.76 00:25:38.528 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme8n1 : 1.14 423.13 26.45 0.00 0.00 136698.64 12615.68 116217.17 00:25:38.528 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme9n1 : 1.12 434.38 27.15 0.00 0.00 137057.57 10376.53 119712.43 00:25:38.528 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:38.528 Verification LBA range: start 0x0 length 0x400 00:25:38.528 Nvme10n1 : 1.11 476.13 29.76 0.00 0.00 124221.38 8574.29 109663.57 00:25:38.528 =================================================================================================================== 00:25:38.528 Total : 4257.89 266.12 0.00 0.00 141640.04 8574.29 174762.67 00:25:38.528 21:21:16 -- target/shutdown.sh@93 -- # stoptarget 00:25:38.528 21:21:16 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:38.528 21:21:16 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:38.528 21:21:16 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:38.528 21:21:16 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:38.528 21:21:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:38.528 21:21:16 -- nvmf/common.sh@116 -- # sync 00:25:38.528 21:21:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:38.528 21:21:16 -- nvmf/common.sh@119 -- # set +e 00:25:38.528 21:21:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:38.528 21:21:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:38.528 rmmod nvme_tcp 00:25:38.528 rmmod nvme_fabrics 00:25:38.528 rmmod nvme_keyring 
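As a quick sanity check on the tc1 table above, the per-device rows can be re-summed and compared with the printed Total line; with 64 KiB I/Os the MiB/s column is simply IOPS / 16 (4257.89 / 16 ≈ 266.12). A rough awk sketch, assuming the rows were saved to perf.txt with the elapsed-time prefix stripped so each line starts with the device name:

# Re-derive the Total row from the NvmeXn1 rows (expected: 4257.89 IOPS, ~266.1 MiB/s).
awk '$1 ~ /^Nvme[0-9]+n1$/ { iops += $4; mibs += $5 }
     END { printf "sum: %.2f IOPS, %.2f MiB/s\n", iops, mibs }' perf.txt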
00:25:38.528 21:21:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:38.528 21:21:16 -- nvmf/common.sh@123 -- # set -e 00:25:38.528 21:21:16 -- nvmf/common.sh@124 -- # return 0 00:25:38.528 21:21:16 -- nvmf/common.sh@477 -- # '[' -n 2483924 ']' 00:25:38.528 21:21:16 -- nvmf/common.sh@478 -- # killprocess 2483924 00:25:38.528 21:21:16 -- common/autotest_common.sh@926 -- # '[' -z 2483924 ']' 00:25:38.528 21:21:16 -- common/autotest_common.sh@930 -- # kill -0 2483924 00:25:38.528 21:21:16 -- common/autotest_common.sh@931 -- # uname 00:25:38.528 21:21:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:38.528 21:21:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2483924 00:25:38.790 21:21:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:38.790 21:21:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:38.790 21:21:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2483924' 00:25:38.790 killing process with pid 2483924 00:25:38.790 21:21:16 -- common/autotest_common.sh@945 -- # kill 2483924 00:25:38.790 21:21:16 -- common/autotest_common.sh@950 -- # wait 2483924 00:25:39.050 21:21:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:39.050 21:21:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:39.050 21:21:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:39.050 21:21:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.050 21:21:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:39.050 21:21:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.050 21:21:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.050 21:21:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.965 21:21:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:40.966 00:25:40.966 real 0m16.505s 00:25:40.966 user 0m34.112s 00:25:40.966 sys 0m6.437s 00:25:40.966 21:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.966 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:40.966 ************************************ 00:25:40.966 END TEST nvmf_shutdown_tc1 00:25:40.966 ************************************ 00:25:40.966 21:21:18 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:40.966 21:21:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.966 21:21:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.966 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:25:40.966 ************************************ 00:25:40.966 START TEST nvmf_shutdown_tc2 00:25:40.966 ************************************ 00:25:40.966 21:21:19 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:40.966 21:21:19 -- target/shutdown.sh@98 -- # starttarget 00:25:40.966 21:21:19 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:40.966 21:21:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.966 21:21:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.966 21:21:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.966 21:21:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.966 21:21:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.966 21:21:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.966 21:21:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.966 21:21:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.966 21:21:19 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:40.966 21:21:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:40.966 21:21:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.966 21:21:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.966 21:21:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:40.966 21:21:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:40.966 21:21:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:40.966 21:21:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:40.966 21:21:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:40.966 21:21:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:40.966 21:21:19 -- nvmf/common.sh@294 -- # net_devs=() 00:25:40.966 21:21:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:40.966 21:21:19 -- nvmf/common.sh@295 -- # e810=() 00:25:40.966 21:21:19 -- nvmf/common.sh@295 -- # local -ga e810 00:25:40.966 21:21:19 -- nvmf/common.sh@296 -- # x722=() 00:25:40.966 21:21:19 -- nvmf/common.sh@296 -- # local -ga x722 00:25:40.966 21:21:19 -- nvmf/common.sh@297 -- # mlx=() 00:25:40.966 21:21:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:40.966 21:21:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.966 21:21:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:40.966 21:21:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:40.966 21:21:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:40.966 21:21:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.966 21:21:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:40.966 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:40.966 21:21:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.966 21:21:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:40.966 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:40.966 21:21:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.966 21:21:19 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:40.966 21:21:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.966 21:21:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.966 21:21:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.966 21:21:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.966 21:21:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:40.966 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:40.966 21:21:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.966 21:21:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.966 21:21:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.966 21:21:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.966 21:21:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.966 21:21:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:40.966 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:40.966 21:21:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.966 21:21:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:40.966 21:21:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:40.966 21:21:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:40.966 21:21:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:40.966 21:21:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.966 21:21:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.966 21:21:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.966 21:21:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:40.966 21:21:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.966 21:21:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.966 21:21:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:40.966 21:21:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.966 21:21:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.966 21:21:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:40.966 21:21:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:40.966 21:21:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.966 21:21:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.227 21:21:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.227 21:21:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.227 21:21:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:41.227 21:21:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.227 21:21:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.489 21:21:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
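Condensed, the nvmf_tcp_init sequence traced above moves the first e810 port into a dedicated namespace, addresses both ends of the 10.0.0.0/24 link, brings everything up, and opens TCP/4420 toward the initiator; the same steps with the interface names from this run (cvl_0_0 as the target port, cvl_0_1 as the initiator port):

# Target interface lives in its own namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the initiator side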
00:25:41.489 21:21:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:41.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:25:41.489 00:25:41.489 --- 10.0.0.2 ping statistics --- 00:25:41.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.489 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:25:41.489 21:21:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:25:41.489 00:25:41.489 --- 10.0.0.1 ping statistics --- 00:25:41.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.489 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:25:41.489 21:21:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.489 21:21:19 -- nvmf/common.sh@410 -- # return 0 00:25:41.489 21:21:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:41.489 21:21:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.489 21:21:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:41.489 21:21:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:41.489 21:21:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.489 21:21:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:41.489 21:21:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:41.489 21:21:19 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:41.489 21:21:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:41.489 21:21:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:41.489 21:21:19 -- common/autotest_common.sh@10 -- # set +x 00:25:41.489 21:21:19 -- nvmf/common.sh@469 -- # nvmfpid=2486254 00:25:41.489 21:21:19 -- nvmf/common.sh@470 -- # waitforlisten 2486254 00:25:41.489 21:21:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:41.489 21:21:19 -- common/autotest_common.sh@819 -- # '[' -z 2486254 ']' 00:25:41.489 21:21:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.489 21:21:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:41.489 21:21:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.489 21:21:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:41.489 21:21:19 -- common/autotest_common.sh@10 -- # set +x 00:25:41.489 [2024-06-08 21:21:19.459811] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:41.489 [2024-06-08 21:21:19.459879] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.489 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.489 [2024-06-08 21:21:19.545450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.749 [2024-06-08 21:21:19.605694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.750 [2024-06-08 21:21:19.605789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.750 [2024-06-08 21:21:19.605795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.750 [2024-06-08 21:21:19.605800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.750 [2024-06-08 21:21:19.605905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.750 [2024-06-08 21:21:19.606061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.750 [2024-06-08 21:21:19.606216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.750 [2024-06-08 21:21:19.606219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:42.322 21:21:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:42.322 21:21:20 -- common/autotest_common.sh@852 -- # return 0 00:25:42.322 21:21:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:42.322 21:21:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:42.322 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.322 21:21:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.322 21:21:20 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.322 21:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.322 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.322 [2024-06-08 21:21:20.276462] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.322 21:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.322 21:21:20 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:42.322 21:21:20 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:42.322 21:21:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:42.322 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.322 21:21:20 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- 
target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:42.322 21:21:20 -- target/shutdown.sh@28 -- # cat 00:25:42.322 21:21:20 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:42.322 21:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.322 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.322 Malloc1 00:25:42.322 [2024-06-08 21:21:20.374962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.322 Malloc2 00:25:42.586 Malloc3 00:25:42.586 Malloc4 00:25:42.586 Malloc5 00:25:42.586 Malloc6 00:25:42.586 Malloc7 00:25:42.586 Malloc8 00:25:42.586 Malloc9 00:25:42.848 Malloc10 00:25:42.848 21:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.848 21:21:20 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:42.848 21:21:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:42.848 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.848 21:21:20 -- target/shutdown.sh@102 -- # perfpid=2486640 00:25:42.848 21:21:20 -- target/shutdown.sh@103 -- # waitforlisten 2486640 /var/tmp/bdevperf.sock 00:25:42.848 21:21:20 -- common/autotest_common.sh@819 -- # '[' -z 2486640 ']' 00:25:42.848 21:21:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:42.848 21:21:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:42.848 21:21:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:42.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
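Each pass of the create_subsystems loop above contributes one Malloc-backed subsystem listening on 10.0.0.2:4420; the batched rpcs.txt it builds is not printed in the log, but per subsystem it corresponds roughly to the RPCs below (method names are the current SPDK ones, and the Malloc size/block size are illustrative assumptions):

# Roughly one loop iteration's worth of RPCs for subsystem $i; the test writes all
# ten iterations into rpcs.txt and replays them through a single rpc_cmd call.
i=1
rpc.py bdev_malloc_create -b Malloc$i 128 512                                  # size and block size assumed
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420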
00:25:42.848 21:21:20 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:42.848 21:21:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:42.848 21:21:20 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:42.848 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.848 21:21:20 -- nvmf/common.sh@520 -- # config=() 00:25:42.848 21:21:20 -- nvmf/common.sh@520 -- # local subsystem config 00:25:42.848 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.848 { 00:25:42.848 "params": { 00:25:42.848 "name": "Nvme$subsystem", 00:25:42.848 "trtype": "$TEST_TRANSPORT", 00:25:42.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.848 "adrfam": "ipv4", 00:25:42.848 "trsvcid": "$NVMF_PORT", 00:25:42.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.848 "hdgst": ${hdgst:-false}, 00:25:42.848 "ddgst": ${ddgst:-false} 00:25:42.848 }, 00:25:42.848 "method": "bdev_nvme_attach_controller" 00:25:42.848 } 00:25:42.848 EOF 00:25:42.848 )") 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.848 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.848 { 00:25:42.848 "params": { 00:25:42.848 "name": "Nvme$subsystem", 00:25:42.848 "trtype": "$TEST_TRANSPORT", 00:25:42.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.848 "adrfam": "ipv4", 00:25:42.848 "trsvcid": "$NVMF_PORT", 00:25:42.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.848 "hdgst": ${hdgst:-false}, 00:25:42.848 "ddgst": ${ddgst:-false} 00:25:42.848 }, 00:25:42.848 "method": "bdev_nvme_attach_controller" 00:25:42.848 } 00:25:42.848 EOF 00:25:42.848 )") 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.848 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.848 { 00:25:42.848 "params": { 00:25:42.848 "name": "Nvme$subsystem", 00:25:42.848 "trtype": "$TEST_TRANSPORT", 00:25:42.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.848 "adrfam": "ipv4", 00:25:42.848 "trsvcid": "$NVMF_PORT", 00:25:42.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.848 "hdgst": ${hdgst:-false}, 00:25:42.848 "ddgst": ${ddgst:-false} 00:25:42.848 }, 00:25:42.848 "method": "bdev_nvme_attach_controller" 00:25:42.848 } 00:25:42.848 EOF 00:25:42.848 )") 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.848 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.848 { 00:25:42.848 "params": { 00:25:42.848 "name": "Nvme$subsystem", 00:25:42.848 "trtype": "$TEST_TRANSPORT", 00:25:42.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.848 "adrfam": "ipv4", 00:25:42.848 "trsvcid": "$NVMF_PORT", 00:25:42.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.848 "hdgst": ${hdgst:-false}, 00:25:42.848 "ddgst": ${ddgst:-false} 00:25:42.848 }, 00:25:42.848 "method": "bdev_nvme_attach_controller" 00:25:42.848 } 00:25:42.848 EOF 00:25:42.848 )") 
00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.848 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.848 { 00:25:42.848 "params": { 00:25:42.848 "name": "Nvme$subsystem", 00:25:42.848 "trtype": "$TEST_TRANSPORT", 00:25:42.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.848 "adrfam": "ipv4", 00:25:42.848 "trsvcid": "$NVMF_PORT", 00:25:42.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.848 "hdgst": ${hdgst:-false}, 00:25:42.848 "ddgst": ${ddgst:-false} 00:25:42.848 }, 00:25:42.848 "method": "bdev_nvme_attach_controller" 00:25:42.848 } 00:25:42.848 EOF 00:25:42.848 )") 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.848 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.848 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.848 { 00:25:42.848 "params": { 00:25:42.848 "name": "Nvme$subsystem", 00:25:42.848 "trtype": "$TEST_TRANSPORT", 00:25:42.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.848 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "$NVMF_PORT", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.849 "hdgst": ${hdgst:-false}, 00:25:42.849 "ddgst": ${ddgst:-false} 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 } 00:25:42.849 EOF 00:25:42.849 )") 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.849 [2024-06-08 21:21:20.811152] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:42.849 [2024-06-08 21:21:20.811205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486640 ] 00:25:42.849 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.849 { 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme$subsystem", 00:25:42.849 "trtype": "$TEST_TRANSPORT", 00:25:42.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "$NVMF_PORT", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.849 "hdgst": ${hdgst:-false}, 00:25:42.849 "ddgst": ${ddgst:-false} 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 } 00:25:42.849 EOF 00:25:42.849 )") 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.849 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.849 { 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme$subsystem", 00:25:42.849 "trtype": "$TEST_TRANSPORT", 00:25:42.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "$NVMF_PORT", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.849 "hdgst": ${hdgst:-false}, 00:25:42.849 "ddgst": ${ddgst:-false} 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 } 00:25:42.849 EOF 00:25:42.849 )") 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.849 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.849 { 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme$subsystem", 00:25:42.849 "trtype": "$TEST_TRANSPORT", 00:25:42.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "$NVMF_PORT", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.849 "hdgst": ${hdgst:-false}, 00:25:42.849 "ddgst": ${ddgst:-false} 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 } 00:25:42.849 EOF 00:25:42.849 )") 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.849 21:21:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.849 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.849 { 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme$subsystem", 00:25:42.849 "trtype": "$TEST_TRANSPORT", 00:25:42.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "$NVMF_PORT", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.849 "hdgst": ${hdgst:-false}, 00:25:42.849 "ddgst": ${ddgst:-false} 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 } 00:25:42.849 EOF 00:25:42.849 )") 00:25:42.849 21:21:20 -- nvmf/common.sh@542 -- # cat 00:25:42.849 21:21:20 -- nvmf/common.sh@544 -- # jq . 00:25:42.849 21:21:20 -- nvmf/common.sh@545 -- # IFS=, 00:25:42.849 21:21:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme1", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme2", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme3", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme4", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme5", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 
"adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme6", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme7", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme8", 00:25:42.849 "trtype": "tcp", 00:25:42.849 "traddr": "10.0.0.2", 00:25:42.849 "adrfam": "ipv4", 00:25:42.849 "trsvcid": "4420", 00:25:42.849 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:42.849 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:42.849 "hdgst": false, 00:25:42.849 "ddgst": false 00:25:42.849 }, 00:25:42.849 "method": "bdev_nvme_attach_controller" 00:25:42.849 },{ 00:25:42.849 "params": { 00:25:42.849 "name": "Nvme9", 00:25:42.850 "trtype": "tcp", 00:25:42.850 "traddr": "10.0.0.2", 00:25:42.850 "adrfam": "ipv4", 00:25:42.850 "trsvcid": "4420", 00:25:42.850 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:42.850 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:42.850 "hdgst": false, 00:25:42.850 "ddgst": false 00:25:42.850 }, 00:25:42.850 "method": "bdev_nvme_attach_controller" 00:25:42.850 },{ 00:25:42.850 "params": { 00:25:42.850 "name": "Nvme10", 00:25:42.850 "trtype": "tcp", 00:25:42.850 "traddr": "10.0.0.2", 00:25:42.850 "adrfam": "ipv4", 00:25:42.850 "trsvcid": "4420", 00:25:42.850 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:42.850 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:42.850 "hdgst": false, 00:25:42.850 "ddgst": false 00:25:42.850 }, 00:25:42.850 "method": "bdev_nvme_attach_controller" 00:25:42.850 }' 00:25:42.850 [2024-06-08 21:21:20.870430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.850 [2024-06-08 21:21:20.932959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.765 Running I/O for 10 seconds... 
00:25:45.026 21:21:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:45.026 21:21:22 -- common/autotest_common.sh@852 -- # return 0 00:25:45.026 21:21:22 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:45.026 21:21:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.026 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:25:45.026 21:21:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.026 21:21:22 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:45.026 21:21:22 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:45.026 21:21:22 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:45.026 21:21:22 -- target/shutdown.sh@57 -- # local ret=1 00:25:45.026 21:21:22 -- target/shutdown.sh@58 -- # local i 00:25:45.026 21:21:22 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:45.026 21:21:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:45.026 21:21:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:45.026 21:21:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:45.026 21:21:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.026 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:25:45.026 21:21:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.026 21:21:22 -- target/shutdown.sh@60 -- # read_io_count=211 00:25:45.026 21:21:22 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:25:45.026 21:21:22 -- target/shutdown.sh@64 -- # ret=0 00:25:45.026 21:21:22 -- target/shutdown.sh@65 -- # break 00:25:45.026 21:21:22 -- target/shutdown.sh@69 -- # return 0 00:25:45.026 21:21:22 -- target/shutdown.sh@109 -- # killprocess 2486640 00:25:45.026 21:21:22 -- common/autotest_common.sh@926 -- # '[' -z 2486640 ']' 00:25:45.026 21:21:22 -- common/autotest_common.sh@930 -- # kill -0 2486640 00:25:45.026 21:21:22 -- common/autotest_common.sh@931 -- # uname 00:25:45.026 21:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.026 21:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2486640 00:25:45.026 21:21:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:45.026 21:21:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:45.026 21:21:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2486640' 00:25:45.026 killing process with pid 2486640 00:25:45.026 21:21:23 -- common/autotest_common.sh@945 -- # kill 2486640 00:25:45.026 21:21:23 -- common/autotest_common.sh@950 -- # wait 2486640 00:25:45.026 Received shutdown signal, test time was about 0.727886 seconds 00:25:45.026 00:25:45.026 Latency(us) 00:25:45.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme1n1 : 0.70 450.02 28.13 0.00 0.00 138581.59 18459.31 123207.68 00:25:45.026 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme2n1 : 0.67 404.91 25.31 0.00 0.00 152301.53 17476.27 131945.81 00:25:45.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme3n1 : 0.71 380.43 23.78 0.00 0.00 151729.69 16493.23 117090.99 00:25:45.026 Job: Nvme4n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme4n1 : 0.69 396.02 24.75 0.00 0.00 151629.30 16711.68 139810.13 00:25:45.026 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme5n1 : 0.70 450.61 28.16 0.00 0.00 132353.59 17585.49 126702.93 00:25:45.026 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme6n1 : 0.68 399.02 24.94 0.00 0.00 146441.44 16711.68 118838.61 00:25:45.026 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme7n1 : 0.73 374.14 23.38 0.00 0.00 146951.03 18786.99 124081.49 00:25:45.026 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme8n1 : 0.70 447.92 27.99 0.00 0.00 128852.98 10758.83 116217.17 00:25:45.026 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.026 Nvme9n1 : 0.69 453.56 28.35 0.00 0.00 125454.89 11687.25 129324.37 00:25:45.026 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:45.026 Verification LBA range: start 0x0 length 0x400 00:25:45.027 Nvme10n1 : 0.69 396.39 24.77 0.00 0.00 141019.70 5051.73 119712.43 00:25:45.027 =================================================================================================================== 00:25:45.027 Total : 4153.02 259.56 0.00 0.00 140936.29 5051.73 139810.13 00:25:45.287 21:21:23 -- target/shutdown.sh@112 -- # sleep 1 00:25:46.231 21:21:24 -- target/shutdown.sh@113 -- # kill -0 2486254 00:25:46.231 21:21:24 -- target/shutdown.sh@115 -- # stoptarget 00:25:46.231 21:21:24 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:46.231 21:21:24 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:46.231 21:21:24 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:46.231 21:21:24 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:46.231 21:21:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:46.231 21:21:24 -- nvmf/common.sh@116 -- # sync 00:25:46.231 21:21:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:46.231 21:21:24 -- nvmf/common.sh@119 -- # set +e 00:25:46.231 21:21:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:46.231 21:21:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:46.231 rmmod nvme_tcp 00:25:46.231 rmmod nvme_fabrics 00:25:46.231 rmmod nvme_keyring 00:25:46.491 21:21:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:46.491 21:21:24 -- nvmf/common.sh@123 -- # set -e 00:25:46.491 21:21:24 -- nvmf/common.sh@124 -- # return 0 00:25:46.491 21:21:24 -- nvmf/common.sh@477 -- # '[' -n 2486254 ']' 00:25:46.491 21:21:24 -- nvmf/common.sh@478 -- # killprocess 2486254 00:25:46.491 21:21:24 -- common/autotest_common.sh@926 -- # '[' -z 2486254 ']' 00:25:46.491 21:21:24 -- common/autotest_common.sh@930 -- # kill -0 2486254 00:25:46.491 21:21:24 -- common/autotest_common.sh@931 -- # uname 00:25:46.491 21:21:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.492 21:21:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
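tc2 does not let that 10-second run finish: waitforio above polls bdev_get_iostat on Nvme1n1 until at least 100 reads have completed (211 here) and then stops bdevperf, which is why the reported test time is only about 0.73 s. A minimal sketch of that polling loop, assuming jq is on PATH and $perfpid is the bdevperf PID:

# Poll read completions on Nvme1n1 over bdevperf's RPC socket and stop the run as
# soon as 100 I/Os are seen; bdevperf catches the signal and prints the partial stats.
for i in $(seq 10 -1 1); do
    reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 1
done
kill "$perfpid"    # graceful signal, so the latency table above still gets emitted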
comm= 2486254 00:25:46.492 21:21:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:46.492 21:21:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:46.492 21:21:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2486254' 00:25:46.492 killing process with pid 2486254 00:25:46.492 21:21:24 -- common/autotest_common.sh@945 -- # kill 2486254 00:25:46.492 21:21:24 -- common/autotest_common.sh@950 -- # wait 2486254 00:25:46.784 21:21:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:46.784 21:21:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:46.784 21:21:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:46.784 21:21:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:46.784 21:21:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:46.784 21:21:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.784 21:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.784 21:21:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.699 21:21:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:48.699 00:25:48.699 real 0m7.685s 00:25:48.699 user 0m22.801s 00:25:48.699 sys 0m1.254s 00:25:48.699 21:21:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.699 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:48.699 ************************************ 00:25:48.699 END TEST nvmf_shutdown_tc2 00:25:48.699 ************************************ 00:25:48.699 21:21:26 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:48.699 21:21:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:48.699 21:21:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.699 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:48.699 ************************************ 00:25:48.699 START TEST nvmf_shutdown_tc3 00:25:48.699 ************************************ 00:25:48.699 21:21:26 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:48.699 21:21:26 -- target/shutdown.sh@120 -- # starttarget 00:25:48.699 21:21:26 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:48.699 21:21:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.699 21:21:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.699 21:21:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.699 21:21:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.699 21:21:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.699 21:21:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.699 21:21:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.699 21:21:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.699 21:21:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.699 21:21:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.699 21:21:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.699 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:48.699 21:21:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:48.699 21:21:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:48.699 21:21:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:48.699 21:21:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:48.699 21:21:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:48.699 21:21:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:48.699 21:21:26 -- nvmf/common.sh@292 
-- # local -A pci_drivers 00:25:48.699 21:21:26 -- nvmf/common.sh@294 -- # net_devs=() 00:25:48.699 21:21:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:48.699 21:21:26 -- nvmf/common.sh@295 -- # e810=() 00:25:48.699 21:21:26 -- nvmf/common.sh@295 -- # local -ga e810 00:25:48.699 21:21:26 -- nvmf/common.sh@296 -- # x722=() 00:25:48.699 21:21:26 -- nvmf/common.sh@296 -- # local -ga x722 00:25:48.699 21:21:26 -- nvmf/common.sh@297 -- # mlx=() 00:25:48.699 21:21:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:48.699 21:21:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.699 21:21:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.700 21:21:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.700 21:21:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.700 21:21:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:48.700 21:21:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:48.700 21:21:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:48.700 21:21:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.700 21:21:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.700 21:21:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.700 21:21:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.700 21:21:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:48.700 21:21:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.700 21:21:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.700 21:21:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:25:48.700 21:21:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.700 21:21:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.700 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.700 21:21:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.700 21:21:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.700 21:21:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.700 21:21:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.700 21:21:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.700 21:21:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:48.700 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.700 21:21:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.700 21:21:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:48.700 21:21:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:48.700 21:21:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:48.700 21:21:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:48.700 21:21:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.700 21:21:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.700 21:21:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.700 21:21:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:48.700 21:21:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.700 21:21:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.700 21:21:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:48.700 21:21:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.700 21:21:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.700 21:21:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:48.700 21:21:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:48.700 21:21:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.700 21:21:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.979 21:21:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.979 21:21:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.979 21:21:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:48.979 21:21:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.979 21:21:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.979 21:21:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.979 21:21:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:48.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:25:48.979 00:25:48.979 --- 10.0.0.2 ping statistics --- 00:25:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.979 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:25:48.979 21:21:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:25:48.979 00:25:48.979 --- 10.0.0.1 ping statistics --- 00:25:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.979 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:25:48.980 21:21:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.980 21:21:27 -- nvmf/common.sh@410 -- # return 0 00:25:48.980 21:21:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:48.980 21:21:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.980 21:21:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:48.980 21:21:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:48.980 21:21:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.980 21:21:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:48.980 21:21:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:49.240 21:21:27 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:49.240 21:21:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:49.240 21:21:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:49.240 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:49.240 21:21:27 -- nvmf/common.sh@469 -- # nvmfpid=2487810 00:25:49.240 21:21:27 -- nvmf/common.sh@470 -- # waitforlisten 2487810 00:25:49.240 21:21:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:49.240 21:21:27 -- common/autotest_common.sh@819 -- # '[' -z 2487810 ']' 00:25:49.240 21:21:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.240 21:21:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.240 21:21:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.240 21:21:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.240 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:49.240 [2024-06-08 21:21:27.166173] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:49.240 [2024-06-08 21:21:27.166258] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.240 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.240 [2024-06-08 21:21:27.252673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.240 [2024-06-08 21:21:27.321390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.241 [2024-06-08 21:21:27.321518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.241 [2024-06-08 21:21:27.321526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.241 [2024-06-08 21:21:27.321533] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
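The namespace topology that nvmf_tcp_init set up above can be reproduced on its own with the same commands; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the ones this run used:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions

With that in place, nvmf_tgt is started under ip netns exec (as in the trace that follows) so it listens on 10.0.0.2:4420 inside the namespace, while the initiator side connects from the default namespace over cvl_0_1.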
00:25:49.241 [2024-06-08 21:21:27.321642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.241 [2024-06-08 21:21:27.321803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.241 [2024-06-08 21:21:27.321924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.241 [2024-06-08 21:21:27.321927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:50.183 21:21:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.183 21:21:27 -- common/autotest_common.sh@852 -- # return 0 00:25:50.183 21:21:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:50.183 21:21:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:50.183 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:50.183 21:21:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.183 21:21:27 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.183 21:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.183 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:50.183 [2024-06-08 21:21:27.974391] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.183 21:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.183 21:21:27 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:50.183 21:21:27 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:50.183 21:21:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:50.183 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:50.183 21:21:27 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:50.183 21:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:27 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:27 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:27 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:50.183 21:21:28 -- target/shutdown.sh@28 -- # cat 00:25:50.183 21:21:28 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:50.183 21:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.183 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:25:50.183 Malloc1 00:25:50.183 [2024-06-08 21:21:28.072971] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.183 Malloc2 
00:25:50.183 Malloc3 00:25:50.183 Malloc4 00:25:50.183 Malloc5 00:25:50.183 Malloc6 00:25:50.444 Malloc7 00:25:50.444 Malloc8 00:25:50.444 Malloc9 00:25:50.444 Malloc10 00:25:50.444 21:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.444 21:21:28 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:50.444 21:21:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:50.444 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:25:50.444 21:21:28 -- target/shutdown.sh@124 -- # perfpid=2488193 00:25:50.444 21:21:28 -- target/shutdown.sh@125 -- # waitforlisten 2488193 /var/tmp/bdevperf.sock 00:25:50.444 21:21:28 -- common/autotest_common.sh@819 -- # '[' -z 2488193 ']' 00:25:50.444 21:21:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.444 21:21:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:50.444 21:21:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.444 21:21:28 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:50.444 21:21:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:50.444 21:21:28 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:50.444 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:25:50.444 21:21:28 -- nvmf/common.sh@520 -- # config=() 00:25:50.444 21:21:28 -- nvmf/common.sh@520 -- # local subsystem config 00:25:50.444 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.444 { 00:25:50.444 "params": { 00:25:50.444 "name": "Nvme$subsystem", 00:25:50.444 "trtype": "$TEST_TRANSPORT", 00:25:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.444 "adrfam": "ipv4", 00:25:50.444 "trsvcid": "$NVMF_PORT", 00:25:50.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.444 "hdgst": ${hdgst:-false}, 00:25:50.444 "ddgst": ${ddgst:-false} 00:25:50.444 }, 00:25:50.444 "method": "bdev_nvme_attach_controller" 00:25:50.444 } 00:25:50.444 EOF 00:25:50.444 )") 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.444 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.444 { 00:25:50.444 "params": { 00:25:50.444 "name": "Nvme$subsystem", 00:25:50.444 "trtype": "$TEST_TRANSPORT", 00:25:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.444 "adrfam": "ipv4", 00:25:50.444 "trsvcid": "$NVMF_PORT", 00:25:50.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.444 "hdgst": ${hdgst:-false}, 00:25:50.444 "ddgst": ${ddgst:-false} 00:25:50.444 }, 00:25:50.444 "method": "bdev_nvme_attach_controller" 00:25:50.444 } 00:25:50.444 EOF 00:25:50.444 )") 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.444 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.444 { 00:25:50.444 "params": { 00:25:50.444 "name": "Nvme$subsystem", 00:25:50.444 "trtype": "$TEST_TRANSPORT", 00:25:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:50.444 "adrfam": "ipv4", 00:25:50.444 "trsvcid": "$NVMF_PORT", 00:25:50.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.444 "hdgst": ${hdgst:-false}, 00:25:50.444 "ddgst": ${ddgst:-false} 00:25:50.444 }, 00:25:50.444 "method": "bdev_nvme_attach_controller" 00:25:50.444 } 00:25:50.444 EOF 00:25:50.444 )") 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.444 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.444 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.444 { 00:25:50.444 "params": { 00:25:50.444 "name": "Nvme$subsystem", 00:25:50.444 "trtype": "$TEST_TRANSPORT", 00:25:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.444 "adrfam": "ipv4", 00:25:50.445 "trsvcid": "$NVMF_PORT", 00:25:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.445 "hdgst": ${hdgst:-false}, 00:25:50.445 "ddgst": ${ddgst:-false} 00:25:50.445 }, 00:25:50.445 "method": "bdev_nvme_attach_controller" 00:25:50.445 } 00:25:50.445 EOF 00:25:50.445 )") 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.445 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.445 { 00:25:50.445 "params": { 00:25:50.445 "name": "Nvme$subsystem", 00:25:50.445 "trtype": "$TEST_TRANSPORT", 00:25:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.445 "adrfam": "ipv4", 00:25:50.445 "trsvcid": "$NVMF_PORT", 00:25:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.445 "hdgst": ${hdgst:-false}, 00:25:50.445 "ddgst": ${ddgst:-false} 00:25:50.445 }, 00:25:50.445 "method": "bdev_nvme_attach_controller" 00:25:50.445 } 00:25:50.445 EOF 00:25:50.445 )") 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.445 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.445 { 00:25:50.445 "params": { 00:25:50.445 "name": "Nvme$subsystem", 00:25:50.445 "trtype": "$TEST_TRANSPORT", 00:25:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.445 "adrfam": "ipv4", 00:25:50.445 "trsvcid": "$NVMF_PORT", 00:25:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.445 "hdgst": ${hdgst:-false}, 00:25:50.445 "ddgst": ${ddgst:-false} 00:25:50.445 }, 00:25:50.445 "method": "bdev_nvme_attach_controller" 00:25:50.445 } 00:25:50.445 EOF 00:25:50.445 )") 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.445 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.445 { 00:25:50.445 "params": { 00:25:50.445 "name": "Nvme$subsystem", 00:25:50.445 "trtype": "$TEST_TRANSPORT", 00:25:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.445 "adrfam": "ipv4", 00:25:50.445 "trsvcid": "$NVMF_PORT", 00:25:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.445 "hdgst": ${hdgst:-false}, 00:25:50.445 "ddgst": ${ddgst:-false} 00:25:50.445 }, 00:25:50.445 "method": "bdev_nvme_attach_controller" 00:25:50.445 } 00:25:50.445 EOF 00:25:50.445 )") 00:25:50.445 [2024-06-08 21:21:28.517363] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:50.445 [2024-06-08 21:21:28.517450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488193 ] 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.445 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.445 { 00:25:50.445 "params": { 00:25:50.445 "name": "Nvme$subsystem", 00:25:50.445 "trtype": "$TEST_TRANSPORT", 00:25:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.445 "adrfam": "ipv4", 00:25:50.445 "trsvcid": "$NVMF_PORT", 00:25:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.445 "hdgst": ${hdgst:-false}, 00:25:50.445 "ddgst": ${ddgst:-false} 00:25:50.445 }, 00:25:50.445 "method": "bdev_nvme_attach_controller" 00:25:50.445 } 00:25:50.445 EOF 00:25:50.445 )") 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.445 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.445 { 00:25:50.445 "params": { 00:25:50.445 "name": "Nvme$subsystem", 00:25:50.445 "trtype": "$TEST_TRANSPORT", 00:25:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.445 "adrfam": "ipv4", 00:25:50.445 "trsvcid": "$NVMF_PORT", 00:25:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.445 "hdgst": ${hdgst:-false}, 00:25:50.445 "ddgst": ${ddgst:-false} 00:25:50.445 }, 00:25:50.445 "method": "bdev_nvme_attach_controller" 00:25:50.445 } 00:25:50.445 EOF 00:25:50.445 )") 00:25:50.445 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.706 21:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.706 21:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.706 { 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme$subsystem", 00:25:50.706 "trtype": "$TEST_TRANSPORT", 00:25:50.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.706 "adrfam": "ipv4", 00:25:50.706 "trsvcid": "$NVMF_PORT", 00:25:50.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.706 "hdgst": ${hdgst:-false}, 00:25:50.706 "ddgst": ${ddgst:-false} 00:25:50.706 }, 00:25:50.706 "method": "bdev_nvme_attach_controller" 00:25:50.706 } 00:25:50.706 EOF 00:25:50.706 )") 00:25:50.706 21:21:28 -- nvmf/common.sh@542 -- # cat 00:25:50.706 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.706 21:21:28 -- nvmf/common.sh@544 -- # jq . 
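The config+=("$(cat <<-EOF ...)") loop traced above is how gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem. A simplified sketch of the same assembly (the real helper runs the joined result through jq, as traced just above, and bdevperf reads it over an anonymous file descriptor, which is why --json points at /dev/fd/63):

entries=()
for i in $(seq 1 10); do
  # one attach_controller entry per subsystem, mirroring the fields seen in the trace
  entries+=("{ \"params\": { \"name\": \"Nvme$i\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\", \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\", \"hdgst\": false, \"ddgst\": false }, \"method\": \"bdev_nvme_attach_controller\" }")
done
( IFS=,; printf '%s\n' "${entries[*]}" )   # comma-join the entries, as the IFS=,/printf step below does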
00:25:50.706 21:21:28 -- nvmf/common.sh@545 -- # IFS=, 00:25:50.706 21:21:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme1", 00:25:50.706 "trtype": "tcp", 00:25:50.706 "traddr": "10.0.0.2", 00:25:50.706 "adrfam": "ipv4", 00:25:50.706 "trsvcid": "4420", 00:25:50.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.706 "hdgst": false, 00:25:50.706 "ddgst": false 00:25:50.706 }, 00:25:50.706 "method": "bdev_nvme_attach_controller" 00:25:50.706 },{ 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme2", 00:25:50.706 "trtype": "tcp", 00:25:50.706 "traddr": "10.0.0.2", 00:25:50.706 "adrfam": "ipv4", 00:25:50.706 "trsvcid": "4420", 00:25:50.706 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:50.706 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:50.706 "hdgst": false, 00:25:50.706 "ddgst": false 00:25:50.706 }, 00:25:50.706 "method": "bdev_nvme_attach_controller" 00:25:50.706 },{ 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme3", 00:25:50.706 "trtype": "tcp", 00:25:50.706 "traddr": "10.0.0.2", 00:25:50.706 "adrfam": "ipv4", 00:25:50.706 "trsvcid": "4420", 00:25:50.706 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:50.706 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:50.706 "hdgst": false, 00:25:50.706 "ddgst": false 00:25:50.706 }, 00:25:50.706 "method": "bdev_nvme_attach_controller" 00:25:50.706 },{ 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme4", 00:25:50.706 "trtype": "tcp", 00:25:50.706 "traddr": "10.0.0.2", 00:25:50.706 "adrfam": "ipv4", 00:25:50.706 "trsvcid": "4420", 00:25:50.706 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:50.706 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:50.706 "hdgst": false, 00:25:50.706 "ddgst": false 00:25:50.706 }, 00:25:50.706 "method": "bdev_nvme_attach_controller" 00:25:50.706 },{ 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme5", 00:25:50.706 "trtype": "tcp", 00:25:50.706 "traddr": "10.0.0.2", 00:25:50.706 "adrfam": "ipv4", 00:25:50.706 "trsvcid": "4420", 00:25:50.706 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:50.706 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:50.706 "hdgst": false, 00:25:50.706 "ddgst": false 00:25:50.706 }, 00:25:50.706 "method": "bdev_nvme_attach_controller" 00:25:50.706 },{ 00:25:50.706 "params": { 00:25:50.706 "name": "Nvme6", 00:25:50.707 "trtype": "tcp", 00:25:50.707 "traddr": "10.0.0.2", 00:25:50.707 "adrfam": "ipv4", 00:25:50.707 "trsvcid": "4420", 00:25:50.707 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:50.707 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:50.707 "hdgst": false, 00:25:50.707 "ddgst": false 00:25:50.707 }, 00:25:50.707 "method": "bdev_nvme_attach_controller" 00:25:50.707 },{ 00:25:50.707 "params": { 00:25:50.707 "name": "Nvme7", 00:25:50.707 "trtype": "tcp", 00:25:50.707 "traddr": "10.0.0.2", 00:25:50.707 "adrfam": "ipv4", 00:25:50.707 "trsvcid": "4420", 00:25:50.707 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:50.707 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:50.707 "hdgst": false, 00:25:50.707 "ddgst": false 00:25:50.707 }, 00:25:50.707 "method": "bdev_nvme_attach_controller" 00:25:50.707 },{ 00:25:50.707 "params": { 00:25:50.707 "name": "Nvme8", 00:25:50.707 "trtype": "tcp", 00:25:50.707 "traddr": "10.0.0.2", 00:25:50.707 "adrfam": "ipv4", 00:25:50.707 "trsvcid": "4420", 00:25:50.707 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:50.707 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:50.707 "hdgst": false, 00:25:50.707 "ddgst": false 00:25:50.707 }, 00:25:50.707 "method": 
"bdev_nvme_attach_controller" 00:25:50.707 },{ 00:25:50.707 "params": { 00:25:50.707 "name": "Nvme9", 00:25:50.707 "trtype": "tcp", 00:25:50.707 "traddr": "10.0.0.2", 00:25:50.707 "adrfam": "ipv4", 00:25:50.707 "trsvcid": "4420", 00:25:50.707 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:50.707 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:50.707 "hdgst": false, 00:25:50.707 "ddgst": false 00:25:50.707 }, 00:25:50.707 "method": "bdev_nvme_attach_controller" 00:25:50.707 },{ 00:25:50.707 "params": { 00:25:50.707 "name": "Nvme10", 00:25:50.707 "trtype": "tcp", 00:25:50.707 "traddr": "10.0.0.2", 00:25:50.707 "adrfam": "ipv4", 00:25:50.707 "trsvcid": "4420", 00:25:50.707 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:50.707 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:50.707 "hdgst": false, 00:25:50.707 "ddgst": false 00:25:50.707 }, 00:25:50.707 "method": "bdev_nvme_attach_controller" 00:25:50.707 }' 00:25:50.707 [2024-06-08 21:21:28.579283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.707 [2024-06-08 21:21:28.642206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.620 Running I/O for 10 seconds... 00:25:52.620 21:21:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.620 21:21:30 -- common/autotest_common.sh@852 -- # return 0 00:25:52.620 21:21:30 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:52.620 21:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.620 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:52.620 21:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.620 21:21:30 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.620 21:21:30 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:52.620 21:21:30 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:52.620 21:21:30 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:52.620 21:21:30 -- target/shutdown.sh@57 -- # local ret=1 00:25:52.620 21:21:30 -- target/shutdown.sh@58 -- # local i 00:25:52.620 21:21:30 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:52.620 21:21:30 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:52.620 21:21:30 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.620 21:21:30 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.620 21:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.620 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:52.620 21:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.620 21:21:30 -- target/shutdown.sh@60 -- # read_io_count=87 00:25:52.620 21:21:30 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:25:52.620 21:21:30 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:52.880 21:21:30 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:52.880 21:21:30 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:52.880 21:21:30 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.880 21:21:30 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.880 21:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.880 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:52.880 21:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.880 21:21:30 -- target/shutdown.sh@60 -- # read_io_count=167 00:25:52.880 21:21:30 -- 
target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:25:52.880 21:21:30 -- target/shutdown.sh@64 -- # ret=0 00:25:52.880 21:21:30 -- target/shutdown.sh@65 -- # break 00:25:52.880 21:21:30 -- target/shutdown.sh@69 -- # return 0 00:25:52.880 21:21:30 -- target/shutdown.sh@134 -- # killprocess 2487810 00:25:52.881 21:21:30 -- common/autotest_common.sh@926 -- # '[' -z 2487810 ']' 00:25:52.881 21:21:30 -- common/autotest_common.sh@930 -- # kill -0 2487810 00:25:52.881 21:21:30 -- common/autotest_common.sh@931 -- # uname 00:25:52.881 21:21:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.881 21:21:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2487810 00:25:53.151 21:21:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:53.151 21:21:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:53.151 21:21:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2487810' 00:25:53.151 killing process with pid 2487810 00:25:53.151 21:21:31 -- common/autotest_common.sh@945 -- # kill 2487810 00:25:53.151 21:21:31 -- common/autotest_common.sh@950 -- # wait 2487810 00:25:53.151 [2024-06-08 21:21:31.005743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.151 [2024-06-08 21:21:31.005944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005962] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.005999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the 
state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.006081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed8e0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.007997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 
21:21:31.008195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.152 [2024-06-08 21:21:31.008199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same 
with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.008305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20716f0 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009550] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the 
state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.153 [2024-06-08 21:21:31.009674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.009755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedd90 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.010997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 
21:21:31.011061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same 
with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.154 [2024-06-08 21:21:31.011200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.011254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee240 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012063] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the 
state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fee6d0 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.155 [2024-06-08 21:21:31.012911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 
21:21:31.012944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.012995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same 
with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feeb80 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013828] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.156 [2024-06-08 21:21:31.013851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the 
state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.013998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fef030 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a020 is same with the state(5) to be set 00:25:53.157 [2024-06-08 21:21:31.014143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.157 [2024-06-08 21:21:31.014190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.157 [2024-06-08 21:21:31.014197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7060 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.014226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d37c0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.014305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 
21:21:31.014335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d6520 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.014413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1993940 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.014497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.158 [2024-06-08 21:21:31.014550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.014557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99590 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.014790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070920 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 
21:21:31.015219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.158 [2024-06-08 21:21:31.015753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.158 [2024-06-08 21:21:31.015775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.015791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.158 [2024-06-08 21:21:31.015799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.158 [2024-06-08 21:21:31.015808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.015986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.015995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.159 [2024-06-08 21:21:31.016317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.159 [2024-06-08 21:21:31.016324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:53.160 [2024-06-08 21:21:31.016389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 
[2024-06-08 21:21:31.016559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 
21:21:31.016718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016873] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bae80 was disconnected and freed. reset controller. 
00:25:53.160 [2024-06-08 21:21:31.016906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.160 [2024-06-08 21:21:31.016943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.160 [2024-06-08 21:21:31.016950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.016959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.016966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.016975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.016982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.016992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.016999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 
21:21:31.017073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017234] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.161 [2024-06-08 21:21:31.017484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.161 [2024-06-08 21:21:31.017491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.017681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.162 [2024-06-08 21:21:31.017688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.162 [2024-06-08 21:21:31.025395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025585] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.162 [2024-06-08 21:21:31.025602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.025657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070db0 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the 
state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 
21:21:31.026476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.163 [2024-06-08 21:21:31.026520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071240 is same with the state(5) to be set 00:25:53.164 [2024-06-08 21:21:31.034863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.034896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.034907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.034916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.034925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.034933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.034943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.034950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.034960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.034967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:53.164 [2024-06-08 21:21:31.034977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.034984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.034993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 
[2024-06-08 21:21:31.035153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035234] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bc460 was disconnected and freed. reset controller. 00:25:53.164 [2024-06-08 21:21:31.035321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.164 [2024-06-08 21:21:31.035610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.164 [2024-06-08 21:21:31.035618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.035988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.035996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.165 [2024-06-08 21:21:31.036109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.165 [2024-06-08 21:21:31.036118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036451] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bf020 was disconnected and freed. reset controller. 
00:25:53.166 [2024-06-08 21:21:31.036574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 
21:21:31.036746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.166 [2024-06-08 21:21:31.036803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.166 [2024-06-08 21:21:31.036812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036907] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.036988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.036996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.167 [2024-06-08 21:21:31.037254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.167 [2024-06-08 21:21:31.037263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.037514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.037521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.044616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.044654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.044665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.044675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.044684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.044692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.044701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.044708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.044718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.044725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.044740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.168 [2024-06-08 21:21:31.044747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.045088] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18c3180 was disconnected and freed. reset controller. 00:25:53.168 [2024-06-08 21:21:31.045175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9a020 (9): Bad file descriptor 00:25:53.168 [2024-06-08 21:21:31.045196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7060 (9): Bad file descriptor 00:25:53.168 [2024-06-08 21:21:31.045209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d37c0 (9): Bad file descriptor 00:25:53.168 [2024-06-08 21:21:31.045224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d6520 (9): Bad file descriptor 00:25:53.168 [2024-06-08 21:21:31.045265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.045284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.045299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.045314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:53.168 [2024-06-08 21:21:31.045328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19accd0 is same with the state(5) to be set 00:25:53.168 [2024-06-08 21:21:31.045354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.045371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.168 [2024-06-08 21:21:31.045387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.168 [2024-06-08 21:21:31.045394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19945f0 is same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.045449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fed70 is same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.045534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1993940 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.045546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1a99590 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.045566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.169 [2024-06-08 21:21:31.045619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.169 [2024-06-08 21:21:31.045626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce160 is same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.050647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:53.169 [2024-06-08 21:21:31.050678] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:53.169 [2024-06-08 21:21:31.050755] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:53.169 [2024-06-08 21:21:31.051013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:53.169 [2024-06-08 21:21:31.051031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:53.169 [2024-06-08 21:21:31.051052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce160 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.051206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.051650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.051689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7060 with addr=10.0.0.2, port=4420 00:25:53.169 [2024-06-08 21:21:31.051707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7060 is same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.052162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.052732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.052769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a99590 with addr=10.0.0.2, port=4420 00:25:53.169 [2024-06-08 21:21:31.052779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99590 is 
same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.052847] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:53.169 [2024-06-08 21:21:31.052889] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:53.169 [2024-06-08 21:21:31.053481] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:53.169 [2024-06-08 21:21:31.054443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.054561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.054572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1993940 with addr=10.0.0.2, port=4420 00:25:53.169 [2024-06-08 21:21:31.054580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1993940 is same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.054605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7060 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.054616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a99590 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.054712] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:53.169 [2024-06-08 21:21:31.054750] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:53.169 [2024-06-08 21:21:31.054994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.055482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.169 [2024-06-08 21:21:31.055493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ce160 with addr=10.0.0.2, port=4420 00:25:53.169 [2024-06-08 21:21:31.055500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce160 is same with the state(5) to be set 00:25:53.169 [2024-06-08 21:21:31.055511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1993940 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.055520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:53.169 [2024-06-08 21:21:31.055527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:53.169 [2024-06-08 21:21:31.055535] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:53.169 [2024-06-08 21:21:31.055550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:53.169 [2024-06-08 21:21:31.055556] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:53.169 [2024-06-08 21:21:31.055563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:53.169 [2024-06-08 21:21:31.055646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.169 [2024-06-08 21:21:31.055655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.169 [2024-06-08 21:21:31.055663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce160 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.055670] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:53.169 [2024-06-08 21:21:31.055681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:53.169 [2024-06-08 21:21:31.055688] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:53.169 [2024-06-08 21:21:31.055721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19accd0 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.055738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19945f0 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.055754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fed70 (9): Bad file descriptor 00:25:53.169 [2024-06-08 21:21:31.055807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.169 [2024-06-08 21:21:31.055825] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:53.169 [2024-06-08 21:21:31.055832] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:53.169 [2024-06-08 21:21:31.055838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:53.169 [2024-06-08 21:21:31.055876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.055903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.055920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.055937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.055954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.055970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.055987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.055994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26624 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.170 [2024-06-08 21:21:31.056427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.170 [2024-06-08 21:21:31.056436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:53.171 [2024-06-08 21:21:31.056477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 
[2024-06-08 21:21:31.056639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 
21:21:31.056803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.171 [2024-06-08 21:21:31.056933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.171 [2024-06-08 21:21:31.056941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4260 is same with the state(5) to be set 00:25:53.172 [2024-06-08 21:21:31.058194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.172 [2024-06-08 21:21:31.058722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.172 [2024-06-08 21:21:31.058730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.058990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.058999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.173 [2024-06-08 21:21:31.059167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.173 [2024-06-08 21:21:31.059176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.059183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.059192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.059198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.059209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.059216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.059225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.059232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.059241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.059249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.059258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.059265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.059273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a64700 is same with the state(5) to be set 00:25:53.174 [2024-06-08 21:21:31.060518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.174 [2024-06-08 21:21:31.060990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.174 [2024-06-08 21:21:31.060998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.175 [2024-06-08 21:21:31.061527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.175 [2024-06-08 21:21:31.061534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.061543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.061551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.061561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.061567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.061576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.061583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.061592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a70b50 is same with the state(5) to be set 00:25:53.176 [2024-06-08 21:21:31.062839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.176 [2024-06-08 21:21:31.062851] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.176 [2024-06-08 21:21:31.062861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:53.176 [2024-06-08 21:21:31.062870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:53.176 [2024-06-08 21:21:31.063417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.063861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.063871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d37c0 with addr=10.0.0.2, port=4420
00:25:53.176 [2024-06-08 21:21:31.063879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d37c0 is same with the state(5) to be set
00:25:53.176 [2024-06-08 21:21:31.064341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.064786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.064796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9a020 with addr=10.0.0.2, port=4420
00:25:53.176 [2024-06-08 21:21:31.064803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a020 is same with the state(5) to be set
00:25:53.176 [2024-06-08 21:21:31.065280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.065783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.065793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d6520 with addr=10.0.0.2, port=4420
00:25:53.176 [2024-06-08 21:21:31.065800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d6520 is same with the state(5) to be set
00:25:53.176 [2024-06-08 21:21:31.066620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:53.176 [2024-06-08 21:21:31.066633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:53.176 [2024-06-08 21:21:31.066641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:53.176 [2024-06-08 21:21:31.066667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d37c0 (9): Bad file descriptor
00:25:53.176 [2024-06-08 21:21:31.066677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9a020 (9): Bad file descriptor
00:25:53.176 [2024-06-08 21:21:31.066686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d6520 (9): Bad file descriptor
00:25:53.176 [2024-06-08 21:21:31.067246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.067790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.067829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a99590 with addr=10.0.0.2, port=4420
00:25:53.176 [2024-06-08 21:21:31.067842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99590 is same with the state(5) to be set
00:25:53.176 [2024-06-08 21:21:31.068305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.068887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.068924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7060 with addr=10.0.0.2, port=4420
00:25:53.176 [2024-06-08 21:21:31.068936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7060 is same with the state(5) to be set
00:25:53.176 [2024-06-08 21:21:31.069416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.069810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.176 [2024-06-08 21:21:31.069820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1993940 with addr=10.0.0.2, port=4420
00:25:53.176 [2024-06-08 21:21:31.069827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1993940 is same with the state(5) to be set
00:25:53.176 [2024-06-08 21:21:31.069835] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.176 [2024-06-08 21:21:31.069841] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.176 [2024-06-08 21:21:31.069850] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.176 [2024-06-08 21:21:31.069866] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:53.176 [2024-06-08 21:21:31.069872] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:53.176 [2024-06-08 21:21:31.069884] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:53.176 [2024-06-08 21:21:31.069895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:53.176 [2024-06-08 21:21:31.069901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:53.176 [2024-06-08 21:21:31.069908] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:53.176 [2024-06-08 21:21:31.069971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.069983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.069999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 
21:21:31.070147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.176 [2024-06-08 21:21:31.070167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.176 [2024-06-08 21:21:31.070174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.177 [2024-06-08 21:21:31.070644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.177 [2024-06-08 21:21:31.070650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.070986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.070993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.071002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.071009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.071018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.071024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.071032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bda40 is same with the state(5) to be set 00:25:53.178 [2024-06-08 21:21:31.072298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.072312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.072324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.072333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.072344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.072353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.178 [2024-06-08 21:21:31.072365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.178 [2024-06-08 21:21:31.072374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.179 [2024-06-08 21:21:31.072821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.179 [2024-06-08 21:21:31.072828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.072984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.072992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:53.180 [2024-06-08 21:21:31.073082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 
[2024-06-08 21:21:31.073247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.180 [2024-06-08 21:21:31.073345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.180 [2024-06-08 21:21:31.073352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.073360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.073367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.073375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c0600 is same with the state(5) to be set 00:25:53.181 [2024-06-08 21:21:31.074613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 
21:21:31.074647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.074989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.074999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.181 [2024-06-08 21:21:31.075132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.181 [2024-06-08 21:21:31.075141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.182 [2024-06-08 21:21:31.075643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.182 [2024-06-08 21:21:31.075650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.183 [2024-06-08 21:21:31.075659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.183 [2024-06-08 21:21:31.075666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.183 [2024-06-08 21:21:31.075673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c1ba0 is same with the state(5) to be set 00:25:53.183 [2024-06-08 21:21:31.077119] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:53.183 [2024-06-08 21:21:31.077141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.183 [2024-06-08 21:21:31.077148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.183 [2024-06-08 21:21:31.077154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.183 [2024-06-08 21:21:31.077161] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:53.183 [2024-06-08 21:21:31.077170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:53.183 [2024-06-08 21:21:31.077209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a99590 (9): Bad file descriptor 00:25:53.183 [2024-06-08 21:21:31.077219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7060 (9): Bad file descriptor 00:25:53.183 [2024-06-08 21:21:31.077228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1993940 (9): Bad file descriptor 00:25:53.183 [2024-06-08 21:21:31.077279] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.183 [2024-06-08 21:21:31.077290] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.183 [2024-06-08 21:21:31.077301] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.183 [2024-06-08 21:21:31.077310] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:53.183 task offset: 34816 on job bdev=Nvme4n1 fails
00:25:53.183 
00:25:53.183                                          Latency(us)
00:25:53.183 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average         min         max
00:25:53.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme1n1 ended in about 0.76 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme1n1  :  0.76   274.34   17.15   84.41   0.00   177193.76    90876.59   175636.48
00:25:53.183 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme2n1 ended in about 0.76 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme2n1  :  0.76   391.84   24.49   84.15   0.00   132097.21    10977.28   115343.36
00:25:53.183 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme3n1 ended in about 0.76 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme3n1  :  0.76   329.04   20.56   83.90   0.00   150680.09    85196.80   135441.07
00:25:53.183 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme4n1 ended in about 0.75 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme4n1  :  0.75   335.97   21.00   85.67   0.00   145850.89    41943.04   136314.88
00:25:53.183 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme5n1 ended in about 0.75 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme5n1  :  0.75   388.90   24.31   85.53   0.00   128160.11    43035.31   107915.95
00:25:53.183 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme6n1 ended in about 0.77 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme6n1  :  0.77   382.00   23.87   82.87   0.00   129562.91    18022.40   125829.12
00:25:53.183 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme7n1 ended in about 0.75 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme7n1  :  0.75   334.90   20.93   85.39   0.00   141445.41    32986.45   133693.44
00:25:53.183 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme8n1 ended in about 0.77 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme8n1  :  0.77   324.04   20.25   82.62   0.00   144977.42    66846.72   151169.71
00:25:53.183 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme9n1 ended in about 0.78 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme9n1  :  0.78   323.09   20.19   82.38   0.00   143883.77    68157.44   142431.57
00:25:53.183 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:53.183 Job: Nvme10n1 ended in about 0.75 seconds with error
00:25:53.183 Verification LBA range: start 0x0 length 0x400
00:25:53.183 Nvme10n1 :  0.75   334.34   20.90   85.25   0.00   137043.06    46312.11   120586.24
00:25:53.183 ===================================================================================================================
00:25:53.183 Total    :         3418.46  213.65  842.18   0.00   142108.78    10977.28   175636.48
00:25:53.183 [2024-06-08 21:21:31.101524] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:25:53.183 [2024-06-08 21:21:31.101556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] 
resetting controller 00:25:53.183 [2024-06-08 21:21:31.102093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.183 [2024-06-08 21:21:31.102581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.183 [2024-06-08 21:21:31.102591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ce160 with addr=10.0.0.2, port=4420 00:25:53.183 [2024-06-08 21:21:31.102600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce160 is same with the state(5) to be set 00:25:53.183 [2024-06-08 21:21:31.103072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.183 [2024-06-08 21:21:31.103508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.183 [2024-06-08 21:21:31.103518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19accd0 with addr=10.0.0.2, port=4420 00:25:53.183 [2024-06-08 21:21:31.103525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19accd0 is same with the state(5) to be set 00:25:53.183 [2024-06-08 21:21:31.103952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.183 [2024-06-08 21:21:31.104428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.183 [2024-06-08 21:21:31.104437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19945f0 with addr=10.0.0.2, port=4420 00:25:53.183 [2024-06-08 21:21:31.104444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19945f0 is same with the state(5) to be set 00:25:53.183 [2024-06-08 21:21:31.104453] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:53.183 [2024-06-08 21:21:31.104460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:53.183 [2024-06-08 21:21:31.104468] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:53.183 [2024-06-08 21:21:31.104480] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:53.183 [2024-06-08 21:21:31.104487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:53.183 [2024-06-08 21:21:31.104493] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:53.183 [2024-06-08 21:21:31.104503] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:53.183 [2024-06-08 21:21:31.104509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:53.183 [2024-06-08 21:21:31.104516] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:53.183 [2024-06-08 21:21:31.105366] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:53.183 [2024-06-08 21:21:31.105378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:53.183 [2024-06-08 21:21:31.105387] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.184 [2024-06-08 21:21:31.105397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.184 [2024-06-08 21:21:31.105408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.184 [2024-06-08 21:21:31.105414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.184 [2024-06-08 21:21:31.105666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.105947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.105956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fed70 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.105963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fed70 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.105974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce160 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.105984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19accd0 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.105993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19945f0 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.106037] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.184 [2024-06-08 21:21:31.106052] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.184 [2024-06-08 21:21:31.106061] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:53.184 [2024-06-08 21:21:31.106510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.106960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.106969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d6520 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.106976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d6520 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.107301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.107649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.107657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9a020 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.107664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a020 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.108123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.108435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.108445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d37c0 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.108452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d37c0 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.108461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fed70 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.108469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.108475] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:53.184 [2024-06-08 21:21:31.108482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:53.184 [2024-06-08 21:21:31.108492] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.108498] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:53.184 [2024-06-08 21:21:31.108504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:53.184 [2024-06-08 21:21:31.108514] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.108520] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:53.184 [2024-06-08 21:21:31.108526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:53.184 [2024-06-08 21:21:31.108576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:53.184 [2024-06-08 21:21:31.108586] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:53.184 [2024-06-08 21:21:31.108594] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:53.184 [2024-06-08 21:21:31.108602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.184 [2024-06-08 21:21:31.108609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.184 [2024-06-08 21:21:31.108615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.184 [2024-06-08 21:21:31.108637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d6520 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.108649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9a020 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.108658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d37c0 (9): Bad file descriptor 00:25:53.184 [2024-06-08 21:21:31.108665] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.108672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:53.184 [2024-06-08 21:21:31.108678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:53.184 [2024-06-08 21:21:31.108707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.184 [2024-06-08 21:21:31.109182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.109573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.109582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1993940 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.109589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1993940 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.110060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.110541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.110550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7060 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.110558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7060 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.111005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.111493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.184 [2024-06-08 21:21:31.111502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a99590 with addr=10.0.0.2, port=4420 00:25:53.184 [2024-06-08 21:21:31.111509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a99590 is same with the state(5) to be set 00:25:53.184 [2024-06-08 21:21:31.111517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.111523] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:53.184 [2024-06-08 21:21:31.111529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:53.184 [2024-06-08 21:21:31.111539] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.111545] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:53.184 [2024-06-08 21:21:31.111551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:53.184 [2024-06-08 21:21:31.111560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.184 [2024-06-08 21:21:31.111566] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.185 [2024-06-08 21:21:31.111573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.185 [2024-06-08 21:21:31.111602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.185 [2024-06-08 21:21:31.111609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.185 [2024-06-08 21:21:31.111615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
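For reference, the Total row in the bdevperf latency summary above is the column-wise sum of the ten per-device rows: the per-device IOPS values add up to 3418.46 and the MiB/s values to 213.65, matching the Total line. The short sketch below re-checks those two columns; it is only an illustrative aid, and latency.txt is a hypothetical file holding a copy of the summary text, not something produced by the test itself.

    # Re-add the IOPS and MiB/s columns of the bdevperf summary and compare them
    # with the Total row (3418.46 IOPS, 213.65 MiB/s in the log above).
    # "latency.txt" is a hypothetical file containing the copied summary text.
    import re

    row = re.compile(r"Nvme\d+n1\s*:\s*([\d.]+)\s+([\d.]+)\s+([\d.]+)")
    iops_sum = 0.0
    mibs_sum = 0.0

    with open("latency.txt") as f:
        for line in f:
            m = row.search(line)
            if m:
                # captured groups: runtime(s), IOPS, MiB/s
                iops_sum += float(m.group(2))
                mibs_sum += float(m.group(3))

    print(f"IOPS  sum: {iops_sum:.2f}  (Total row reports 3418.46)")
    print(f"MiB/s sum: {mibs_sum:.2f}  (Total row reports 213.65)")

The Fail/s column adds up the same way, to within rounding, to the 842.18 aborted I/Os per second shown in the Total row.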
00:25:53.446 21:21:31 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:53.446 21:21:31 -- target/shutdown.sh@138 -- # sleep 1 00:25:53.446 [2024-06-08 21:21:31.339702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1993940 (9): Bad file descriptor 00:25:53.446 [2024-06-08 21:21:31.339788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7060 (9): Bad file descriptor 00:25:53.446 [2024-06-08 21:21:31.339819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a99590 (9): Bad file descriptor 00:25:53.446 [2024-06-08 21:21:31.339986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:53.446 [2024-06-08 21:21:31.340016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:53.446 [2024-06-08 21:21:31.340039] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:53.446 [2024-06-08 21:21:31.340073] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:53.446 [2024-06-08 21:21:31.340093] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:53.446 [2024-06-08 21:21:31.340113] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:53.446 [2024-06-08 21:21:31.340140] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:53.446 [2024-06-08 21:21:31.340159] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:53.446 [2024-06-08 21:21:31.340178] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:53.446 [2024-06-08 21:21:31.340255] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.446 [2024-06-08 21:21:31.340268] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.446 [2024-06-08 21:21:31.340279] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:53.446 [2024-06-08 21:21:31.340315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.446 [2024-06-08 21:21:31.340324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.446 [2024-06-08 21:21:31.340330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.447 [2024-06-08 21:21:31.340376] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:53.447 [2024-06-08 21:21:31.340388] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:53.447 [2024-06-08 21:21:31.340397] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:53.447 [2024-06-08 21:21:31.340451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:53.447 [2024-06-08 21:21:31.340461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.447 [2024-06-08 21:21:31.340469] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:53.447 [2024-06-08 21:21:31.340477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:53.447 [2024-06-08 21:21:31.341000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.341407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.341418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fed70 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.341430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fed70 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.341987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.342623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.342661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19945f0 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.342679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19945f0 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.343159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.343665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.343704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19accd0 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.343715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19accd0 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.344191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.344446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.344464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ce160 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.344472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce160 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.344786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.345094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.345104] 
nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d37c0 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.345112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d37c0 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.345683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.346178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.346192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9a020 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.346202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9a020 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.346730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.347224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.447 [2024-06-08 21:21:31.347238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d6520 with addr=10.0.0.2, port=4420 00:25:53.447 [2024-06-08 21:21:31.347247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d6520 is same with the state(5) to be set 00:25:53.447 [2024-06-08 21:21:31.347261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fed70 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19945f0 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19accd0 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce160 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d37c0 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9a020 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d6520 (9): Bad file descriptor 00:25:53.447 [2024-06-08 21:21:31.347357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347364] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:53.447 [2024-06-08 21:21:31.347386] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:53.447 [2024-06-08 21:21:31.347417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347424] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347431] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:53.447 [2024-06-08 21:21:31.347474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.447 [2024-06-08 21:21:31.347484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.447 [2024-06-08 21:21:31.347490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.447 [2024-06-08 21:21:31.347496] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347503] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:53.447 [2024-06-08 21:21:31.347519] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347526] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347533] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.447 [2024-06-08 21:21:31.347542] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347555] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:53.447 [2024-06-08 21:21:31.347564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:53.447 [2024-06-08 21:21:31.347571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:53.447 [2024-06-08 21:21:31.347578] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:53.447 [2024-06-08 21:21:31.347606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.447 [2024-06-08 21:21:31.347614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.447 [2024-06-08 21:21:31.347620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.447 [2024-06-08 21:21:31.347627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
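A note on the repeated "posix_sock_create: connect() failed, errno = 111" lines above: errno 111 is ECONNREFUSED on Linux, so each reconnect attempt is simply being refused, which is consistent with the target listeners having already been torn down by this shutdown test. A quick Python check of the errno value:

    # errno 111, as reported by posix_sock_create() above, is ECONNREFUSED on Linux.
    import errno
    import os

    print(errno.ECONNREFUSED)               # 111 on Linux
    print(os.strerror(errno.ECONNREFUSED))  # "Connection refused"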
00:25:54.390 21:21:32 -- target/shutdown.sh@141 -- # kill -9 2488193 00:25:54.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2488193) - No such process 00:25:54.390 21:21:32 -- target/shutdown.sh@141 -- # true 00:25:54.390 21:21:32 -- target/shutdown.sh@143 -- # stoptarget 00:25:54.390 21:21:32 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:54.390 21:21:32 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:54.390 21:21:32 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:54.390 21:21:32 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:54.390 21:21:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:54.390 21:21:32 -- nvmf/common.sh@116 -- # sync 00:25:54.390 21:21:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:54.390 21:21:32 -- nvmf/common.sh@119 -- # set +e 00:25:54.390 21:21:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:54.390 21:21:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:54.390 rmmod nvme_tcp 00:25:54.390 rmmod nvme_fabrics 00:25:54.390 rmmod nvme_keyring 00:25:54.390 21:21:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:54.390 21:21:32 -- nvmf/common.sh@123 -- # set -e 00:25:54.390 21:21:32 -- nvmf/common.sh@124 -- # return 0 00:25:54.390 21:21:32 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:54.390 21:21:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:54.390 21:21:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:54.390 21:21:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:54.390 21:21:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.390 21:21:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:54.390 21:21:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.390 21:21:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.390 21:21:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.935 21:21:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:56.935 00:25:56.935 real 0m7.683s 00:25:56.935 user 0m18.744s 00:25:56.935 sys 0m1.235s 00:25:56.935 21:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.935 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.935 ************************************ 00:25:56.935 END TEST nvmf_shutdown_tc3 00:25:56.935 ************************************ 00:25:56.935 21:21:34 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:56.935 00:25:56.935 real 0m32.143s 00:25:56.935 user 1m15.760s 00:25:56.935 sys 0m9.122s 00:25:56.935 21:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.935 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.935 ************************************ 00:25:56.935 END TEST nvmf_shutdown 00:25:56.935 ************************************ 00:25:56.935 21:21:34 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:56.935 21:21:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:56.935 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.935 21:21:34 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:56.935 21:21:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:56.935 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.935 21:21:34 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:56.935 21:21:34 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:56.935 21:21:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:56.935 21:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:56.935 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:56.935 ************************************ 00:25:56.935 START TEST nvmf_multicontroller 00:25:56.935 ************************************ 00:25:56.935 21:21:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:56.935 * Looking for test storage... 00:25:56.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.935 21:21:34 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.935 21:21:34 -- nvmf/common.sh@7 -- # uname -s 00:25:56.935 21:21:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.935 21:21:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.935 21:21:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.935 21:21:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.935 21:21:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.935 21:21:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.935 21:21:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.935 21:21:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.935 21:21:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.935 21:21:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.935 21:21:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:56.935 21:21:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:56.935 21:21:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.935 21:21:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.935 21:21:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.935 21:21:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.935 21:21:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.935 21:21:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.935 21:21:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.935 21:21:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.935 21:21:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.935 21:21:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.935 21:21:34 -- paths/export.sh@5 -- # export PATH 00:25:56.935 21:21:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.935 21:21:34 -- nvmf/common.sh@46 -- # : 0 00:25:56.935 21:21:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:56.935 21:21:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:56.935 21:21:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:56.935 21:21:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.935 21:21:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.935 21:21:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:56.935 21:21:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:56.935 21:21:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:56.935 21:21:34 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.935 21:21:34 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.935 21:21:34 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:56.935 21:21:34 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:56.935 21:21:34 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:56.935 21:21:34 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:56.935 21:21:34 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:56.935 21:21:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:56.935 21:21:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.935 21:21:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:56.935 21:21:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:56.935 21:21:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:56.935 21:21:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.935 21:21:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.935 21:21:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
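The trace above shows multicontroller.sh being launched through run_test, sourcing test/nvmf/common.sh for its defaults (MALLOC_BDEV_SIZE, the 60000/60001 host ports, the bdevperf RPC socket) and then entering nvmftestinit. A minimal sketch of the skeleton these host tests follow, reconstructed from the trace; the testdir/rootdir variables and the relative source path are illustrative assumptions, not lines taken from this log:

  #!/usr/bin/env bash
  testdir=$(readlink -f "$(dirname "$0")")      # assumed helper variables
  rootdir=$(readlink -f "$testdir/../../..")
  source "$rootdir/test/nvmf/common.sh"         # brings in nvmftestinit/nvmftestfini and the NVMF_* defaults

  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  nvmftestinit                                  # NIC detection, namespace setup, modprobe nvme-tcp
  # ... test body: nvmfappstart, rpc_cmd provisioning, bdevperf workload ...
  trap - SIGINT SIGTERM EXIT
  nvmftestfini                                  # unload nvme-tcp and tear the namespace down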
00:25:56.936 21:21:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:56.936 21:21:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:56.936 21:21:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:56.936 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:03.524 21:21:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:03.524 21:21:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:03.524 21:21:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:03.524 21:21:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:03.524 21:21:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:03.524 21:21:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:03.524 21:21:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:03.524 21:21:41 -- nvmf/common.sh@294 -- # net_devs=() 00:26:03.524 21:21:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:03.524 21:21:41 -- nvmf/common.sh@295 -- # e810=() 00:26:03.524 21:21:41 -- nvmf/common.sh@295 -- # local -ga e810 00:26:03.524 21:21:41 -- nvmf/common.sh@296 -- # x722=() 00:26:03.524 21:21:41 -- nvmf/common.sh@296 -- # local -ga x722 00:26:03.524 21:21:41 -- nvmf/common.sh@297 -- # mlx=() 00:26:03.524 21:21:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:03.524 21:21:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.524 21:21:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:03.524 21:21:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:03.524 21:21:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:03.524 21:21:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:03.524 21:21:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:03.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:03.524 21:21:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:03.524 21:21:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:03.524 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:03.524 21:21:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:26:03.524 21:21:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:03.524 21:21:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:03.524 21:21:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.524 21:21:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:03.524 21:21:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.524 21:21:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:03.524 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:03.524 21:21:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.524 21:21:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:03.524 21:21:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.524 21:21:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:03.524 21:21:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.524 21:21:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:03.524 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:03.524 21:21:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.524 21:21:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:03.524 21:21:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:03.524 21:21:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:03.524 21:21:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:03.524 21:21:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.524 21:21:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.524 21:21:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.524 21:21:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:03.524 21:21:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.524 21:21:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.524 21:21:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:03.524 21:21:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.524 21:21:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.524 21:21:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:03.524 21:21:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:03.524 21:21:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.524 21:21:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.524 21:21:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.525 21:21:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.525 21:21:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:03.525 21:21:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.525 21:21:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.525 21:21:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:26:03.525 21:21:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:03.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:26:03.525 00:26:03.525 --- 10.0.0.2 ping statistics --- 00:26:03.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.525 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:26:03.525 21:21:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:26:03.525 00:26:03.525 --- 10.0.0.1 ping statistics --- 00:26:03.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.525 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:26:03.525 21:21:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.525 21:21:41 -- nvmf/common.sh@410 -- # return 0 00:26:03.525 21:21:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:03.525 21:21:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.525 21:21:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:03.525 21:21:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:03.525 21:21:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.525 21:21:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:03.525 21:21:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:03.525 21:21:41 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:03.525 21:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:03.525 21:21:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:03.525 21:21:41 -- common/autotest_common.sh@10 -- # set +x 00:26:03.525 21:21:41 -- nvmf/common.sh@469 -- # nvmfpid=2493144 00:26:03.525 21:21:41 -- nvmf/common.sh@470 -- # waitforlisten 2493144 00:26:03.525 21:21:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:03.525 21:21:41 -- common/autotest_common.sh@819 -- # '[' -z 2493144 ']' 00:26:03.525 21:21:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.786 21:21:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:03.786 21:21:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.786 21:21:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:03.786 21:21:41 -- common/autotest_common.sh@10 -- # set +x 00:26:03.786 [2024-06-08 21:21:41.664306] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:03.786 [2024-06-08 21:21:41.664378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.786 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.786 [2024-06-08 21:21:41.748546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:03.786 [2024-06-08 21:21:41.811292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:03.786 [2024-06-08 21:21:41.811391] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
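nvmftestinit above picks the two E810 ports (cvl_0_0/cvl_0_1), moves the target-side interface into a private network namespace and checks reachability with a ping in each direction before loading nvme-tcp. Condensed into a standalone sketch, using the same addresses and interface names the trace uses (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
  modprobe nvme-tcp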
00:26:03.786 [2024-06-08 21:21:41.811397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.786 [2024-06-08 21:21:41.811410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.786 [2024-06-08 21:21:41.811550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.786 [2024-06-08 21:21:41.811705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.786 [2024-06-08 21:21:41.811708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.357 21:21:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:04.357 21:21:42 -- common/autotest_common.sh@852 -- # return 0 00:26:04.357 21:21:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:04.357 21:21:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:04.357 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.618 21:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.618 21:21:42 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:04.618 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.618 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.618 [2024-06-08 21:21:42.487774] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.618 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.618 21:21:42 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:04.618 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.618 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.618 Malloc0 00:26:04.618 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.618 21:21:42 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.618 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.618 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.618 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.618 21:21:42 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.618 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.618 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.618 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 [2024-06-08 21:21:42.551321] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 [2024-06-08 21:21:42.563286] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
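With nvmf_tgt running inside the namespace, the test provisions it entirely over JSON-RPC: one TCP transport, a malloc bdev (64 MB, 512-byte blocks) and subsystem cnode1 listening on ports 4420 and 4421 (cnode2 gets the same treatment with Malloc1 just below). rpc_cmd in the trace forwards these arguments to the target's default /var/tmp/spdk.sock; written out against scripts/rpc.py the sequence is roughly:

  rpc=./scripts/rpc.py    # path relative to the spdk repo root (assumption)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421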
00:26:04.619 21:21:42 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 Malloc1 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:04.619 21:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 21:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.619 21:21:42 -- host/multicontroller.sh@44 -- # bdevperf_pid=2493308 00:26:04.619 21:21:42 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:04.619 21:21:42 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:04.619 21:21:42 -- host/multicontroller.sh@47 -- # waitforlisten 2493308 /var/tmp/bdevperf.sock 00:26:04.619 21:21:42 -- common/autotest_common.sh@819 -- # '[' -z 2493308 ']' 00:26:04.619 21:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.619 21:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:04.619 21:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:04.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
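The bdevperf invocation above uses -z, so the application starts idle and waits on its own RPC socket (-r /var/tmp/bdevperf.sock); the controllers are attached through that socket afterwards, and only then is the -q 128 -o 4096 -w write -t 1 job started. The pattern, condensed from the trace (paths relative to the spdk repo root):

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock     # harness helper seen in the trace

  # Create the NVMe bdev inside bdevperf; -i/-c pin the host-side address and port to 10.0.0.2:60000.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # Run the configured workload against the bdevs that now exist.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests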
00:26:04.619 21:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:04.619 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:05.561 21:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:05.561 21:21:43 -- common/autotest_common.sh@852 -- # return 0 00:26:05.561 21:21:43 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:05.561 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.561 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.561 NVMe0n1 00:26:05.561 21:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.561 21:21:43 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:05.561 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.561 21:21:43 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:05.561 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.561 21:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.561 1 00:26:05.561 21:21:43 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:05.561 21:21:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:05.561 21:21:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:05.561 21:21:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:05.561 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.561 21:21:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:05.561 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.561 21:21:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:05.561 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.561 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.561 request: 00:26:05.561 { 00:26:05.561 "name": "NVMe0", 00:26:05.561 "trtype": "tcp", 00:26:05.561 "traddr": "10.0.0.2", 00:26:05.561 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:05.561 "hostaddr": "10.0.0.2", 00:26:05.561 "hostsvcid": "60000", 00:26:05.561 "adrfam": "ipv4", 00:26:05.561 "trsvcid": "4420", 00:26:05.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.561 "method": "bdev_nvme_attach_controller", 00:26:05.561 "req_id": 1 00:26:05.561 } 00:26:05.561 Got JSON-RPC error response 00:26:05.561 response: 00:26:05.561 { 00:26:05.561 "code": -114, 00:26:05.561 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:05.561 } 00:26:05.561 21:21:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:05.561 21:21:43 -- common/autotest_common.sh@643 -- # es=1 00:26:05.561 21:21:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:05.561 21:21:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:05.561 21:21:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:05.561 21:21:43 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:05.561 21:21:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:05.561 21:21:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:05.561 21:21:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:05.561 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.561 21:21:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:05.561 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.561 21:21:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:05.561 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.561 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.561 request: 00:26:05.561 { 00:26:05.561 "name": "NVMe0", 00:26:05.561 "trtype": "tcp", 00:26:05.562 "traddr": "10.0.0.2", 00:26:05.562 "hostaddr": "10.0.0.2", 00:26:05.562 "hostsvcid": "60000", 00:26:05.562 "adrfam": "ipv4", 00:26:05.562 "trsvcid": "4420", 00:26:05.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:05.562 "method": "bdev_nvme_attach_controller", 00:26:05.562 "req_id": 1 00:26:05.562 } 00:26:05.562 Got JSON-RPC error response 00:26:05.562 response: 00:26:05.562 { 00:26:05.562 "code": -114, 00:26:05.562 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:05.562 } 00:26:05.562 21:21:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:05.562 21:21:43 -- common/autotest_common.sh@643 -- # es=1 00:26:05.562 21:21:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:05.562 21:21:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:05.562 21:21:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:05.562 21:21:43 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:05.562 21:21:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:05.562 21:21:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:05.562 21:21:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:05.562 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.562 21:21:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:05.562 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.562 21:21:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:05.562 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.562 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.562 request: 00:26:05.562 { 00:26:05.562 "name": "NVMe0", 00:26:05.562 "trtype": "tcp", 00:26:05.562 "traddr": "10.0.0.2", 00:26:05.562 "hostaddr": 
"10.0.0.2", 00:26:05.562 "hostsvcid": "60000", 00:26:05.562 "adrfam": "ipv4", 00:26:05.562 "trsvcid": "4420", 00:26:05.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.562 "multipath": "disable", 00:26:05.562 "method": "bdev_nvme_attach_controller", 00:26:05.562 "req_id": 1 00:26:05.562 } 00:26:05.562 Got JSON-RPC error response 00:26:05.562 response: 00:26:05.562 { 00:26:05.562 "code": -114, 00:26:05.562 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:05.562 } 00:26:05.562 21:21:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:05.562 21:21:43 -- common/autotest_common.sh@643 -- # es=1 00:26:05.562 21:21:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:05.562 21:21:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:05.562 21:21:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:05.562 21:21:43 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:05.562 21:21:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:05.562 21:21:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:05.562 21:21:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:05.562 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.562 21:21:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:05.562 21:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:05.562 21:21:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:05.562 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.562 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.562 request: 00:26:05.562 { 00:26:05.562 "name": "NVMe0", 00:26:05.562 "trtype": "tcp", 00:26:05.562 "traddr": "10.0.0.2", 00:26:05.562 "hostaddr": "10.0.0.2", 00:26:05.562 "hostsvcid": "60000", 00:26:05.562 "adrfam": "ipv4", 00:26:05.562 "trsvcid": "4420", 00:26:05.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.562 "multipath": "failover", 00:26:05.562 "method": "bdev_nvme_attach_controller", 00:26:05.562 "req_id": 1 00:26:05.562 } 00:26:05.562 Got JSON-RPC error response 00:26:05.562 response: 00:26:05.562 { 00:26:05.562 "code": -114, 00:26:05.562 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:05.562 } 00:26:05.562 21:21:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:05.562 21:21:43 -- common/autotest_common.sh@643 -- # es=1 00:26:05.562 21:21:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:05.562 21:21:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:05.562 21:21:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:05.562 21:21:43 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:05.562 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.562 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.823 00:26:05.823 21:21:43 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:26:05.823 21:21:43 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:05.823 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.823 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.823 21:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.823 21:21:43 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:05.823 21:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.823 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 00:26:06.119 21:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.119 21:21:44 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.119 21:21:44 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:06.119 21:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.119 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 21:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.119 21:21:44 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:06.119 21:21:44 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:07.505 0 00:26:07.505 21:21:45 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:07.505 21:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.505 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:26:07.505 21:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.505 21:21:45 -- host/multicontroller.sh@100 -- # killprocess 2493308 00:26:07.505 21:21:45 -- common/autotest_common.sh@926 -- # '[' -z 2493308 ']' 00:26:07.505 21:21:45 -- common/autotest_common.sh@930 -- # kill -0 2493308 00:26:07.505 21:21:45 -- common/autotest_common.sh@931 -- # uname 00:26:07.505 21:21:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:07.505 21:21:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2493308 00:26:07.505 21:21:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:07.505 21:21:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:07.505 21:21:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2493308' 00:26:07.505 killing process with pid 2493308 00:26:07.505 21:21:45 -- common/autotest_common.sh@945 -- # kill 2493308 00:26:07.505 21:21:45 -- common/autotest_common.sh@950 -- # wait 2493308 00:26:07.505 21:21:45 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.505 21:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.505 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:26:07.505 21:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.505 21:21:45 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:07.505 21:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.505 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:26:07.505 21:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.505 21:21:45 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
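The four rejected attach attempts above (a different hostnqn, a different subsystem, -x disable, and -x failover against a path that is already claimed) are the point of the test: each is wrapped in NOT, so the -114 "already exists" responses are the expected outcome, while the plain re-attach of NVMe0 on port 4421 is accepted as a second path to the same controller. A minimal sketch of that expected-failure wrapper plus one negative and one positive case; the real helper lives in autotest_common.sh and is more elaborate than this:

  NOT() {                  # succeed only if the wrapped command fails
      if "$@"; then
          return 1         # unexpected success
      fi
      return 0             # expected failure
  }

  # Re-attaching the same bdev name with a different hostnqn must be refused (-114).
  NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001

  # The same subsystem on its second listener port is fine: a second path for NVMe0.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1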
00:26:07.505 21:21:45 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:07.505 21:21:45 -- common/autotest_common.sh@1597 -- # read -r file 00:26:07.505 21:21:45 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:07.505 21:21:45 -- common/autotest_common.sh@1596 -- # sort -u 00:26:07.505 21:21:45 -- common/autotest_common.sh@1598 -- # cat 00:26:07.505 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:07.505 [2024-06-08 21:21:42.678691] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:07.505 [2024-06-08 21:21:42.678745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493308 ] 00:26:07.505 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.505 [2024-06-08 21:21:42.737413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.505 [2024-06-08 21:21:42.800500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.505 [2024-06-08 21:21:44.062232] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name d45213d9-d0a3-47bb-8dd2-624f6a21ccd6 already exists 00:26:07.505 [2024-06-08 21:21:44.062263] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:d45213d9-d0a3-47bb-8dd2-624f6a21ccd6 alias for bdev NVMe1n1 00:26:07.505 [2024-06-08 21:21:44.062273] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:07.505 Running I/O for 1 seconds... 00:26:07.505 00:26:07.505 Latency(us) 00:26:07.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.505 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:07.505 NVMe0n1 : 1.00 28253.37 110.36 0.00 0.00 4516.64 3959.47 18459.31 00:26:07.505 =================================================================================================================== 00:26:07.505 Total : 28253.37 110.36 0.00 0.00 4516.64 3959.47 18459.31 00:26:07.505 Received shutdown signal, test time was about 1.000000 seconds 00:26:07.505 00:26:07.505 Latency(us) 00:26:07.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.506 =================================================================================================================== 00:26:07.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.506 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:07.506 21:21:45 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:07.506 21:21:45 -- common/autotest_common.sh@1597 -- # read -r file 00:26:07.506 21:21:45 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:07.506 21:21:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:07.506 21:21:45 -- nvmf/common.sh@116 -- # sync 00:26:07.506 21:21:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:07.506 21:21:45 -- nvmf/common.sh@119 -- # set +e 00:26:07.506 21:21:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:07.506 21:21:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:07.506 rmmod nvme_tcp 00:26:07.506 rmmod nvme_fabrics 00:26:07.506 rmmod nvme_keyring 00:26:07.506 21:21:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:07.506 21:21:45 -- nvmf/common.sh@123 -- # 
set -e 00:26:07.506 21:21:45 -- nvmf/common.sh@124 -- # return 0 00:26:07.506 21:21:45 -- nvmf/common.sh@477 -- # '[' -n 2493144 ']' 00:26:07.506 21:21:45 -- nvmf/common.sh@478 -- # killprocess 2493144 00:26:07.506 21:21:45 -- common/autotest_common.sh@926 -- # '[' -z 2493144 ']' 00:26:07.506 21:21:45 -- common/autotest_common.sh@930 -- # kill -0 2493144 00:26:07.506 21:21:45 -- common/autotest_common.sh@931 -- # uname 00:26:07.506 21:21:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:07.506 21:21:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2493144 00:26:07.506 21:21:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:07.506 21:21:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:07.506 21:21:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2493144' 00:26:07.506 killing process with pid 2493144 00:26:07.506 21:21:45 -- common/autotest_common.sh@945 -- # kill 2493144 00:26:07.506 21:21:45 -- common/autotest_common.sh@950 -- # wait 2493144 00:26:07.766 21:21:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:07.766 21:21:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:07.766 21:21:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:07.766 21:21:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.766 21:21:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:07.766 21:21:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.766 21:21:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.766 21:21:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.681 21:21:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:09.681 00:26:09.681 real 0m13.211s 00:26:09.681 user 0m16.793s 00:26:09.681 sys 0m5.721s 00:26:09.681 21:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.681 21:21:47 -- common/autotest_common.sh@10 -- # set +x 00:26:09.681 ************************************ 00:26:09.681 END TEST nvmf_multicontroller 00:26:09.681 ************************************ 00:26:09.943 21:21:47 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:09.943 21:21:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:09.943 21:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:09.943 21:21:47 -- common/autotest_common.sh@10 -- # set +x 00:26:09.943 ************************************ 00:26:09.943 START TEST nvmf_aer 00:26:09.943 ************************************ 00:26:09.943 21:21:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:09.943 * Looking for test storage... 
00:26:09.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.943 21:21:47 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.943 21:21:47 -- nvmf/common.sh@7 -- # uname -s 00:26:09.943 21:21:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.943 21:21:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.943 21:21:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.943 21:21:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.943 21:21:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.943 21:21:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.943 21:21:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.943 21:21:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.943 21:21:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.943 21:21:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.943 21:21:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:09.943 21:21:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:09.943 21:21:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.943 21:21:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.943 21:21:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.943 21:21:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.943 21:21:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.943 21:21:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.943 21:21:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.943 21:21:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.943 21:21:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.943 21:21:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.943 21:21:47 -- paths/export.sh@5 -- # export PATH 00:26:09.943 21:21:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.943 21:21:47 -- nvmf/common.sh@46 -- # : 0 00:26:09.943 21:21:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:09.943 21:21:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:09.943 21:21:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:09.943 21:21:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.943 21:21:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.943 21:21:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:09.943 21:21:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:09.943 21:21:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:09.944 21:21:47 -- host/aer.sh@11 -- # nvmftestinit 00:26:09.944 21:21:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:09.944 21:21:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.944 21:21:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:09.944 21:21:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:09.944 21:21:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:09.944 21:21:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.944 21:21:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.944 21:21:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.944 21:21:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:09.944 21:21:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:09.944 21:21:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:09.944 21:21:47 -- common/autotest_common.sh@10 -- # set +x 00:26:18.085 21:21:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:18.085 21:21:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:18.085 21:21:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:18.085 21:21:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:18.085 21:21:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:18.085 21:21:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:18.085 21:21:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:18.085 21:21:54 -- nvmf/common.sh@294 -- # net_devs=() 00:26:18.085 21:21:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:18.085 21:21:54 -- nvmf/common.sh@295 -- # e810=() 00:26:18.085 21:21:54 -- nvmf/common.sh@295 -- # local -ga e810 00:26:18.085 21:21:54 -- nvmf/common.sh@296 -- # x722=() 00:26:18.085 
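Sourcing nvmf/common.sh again for the aer test regenerates the host identity used by the initiator-side commands: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is its trailing UUID. A sketch of that derivation; only the resulting values appear in the trace, so the exact parameter expansion here is an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the UUID part (assumed derivation)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")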
21:21:54 -- nvmf/common.sh@296 -- # local -ga x722 00:26:18.085 21:21:54 -- nvmf/common.sh@297 -- # mlx=() 00:26:18.085 21:21:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:18.085 21:21:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.085 21:21:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:18.085 21:21:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:18.085 21:21:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:18.085 21:21:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:18.085 21:21:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:18.085 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:18.085 21:21:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:18.085 21:21:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:18.085 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:18.085 21:21:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:18.085 21:21:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:18.085 21:21:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.085 21:21:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:18.085 21:21:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.085 21:21:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:18.085 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:18.085 21:21:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.085 21:21:54 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:18.085 21:21:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.085 21:21:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:18.085 21:21:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.085 21:21:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:18.085 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:18.085 21:21:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.085 21:21:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:18.085 21:21:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:18.085 21:21:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:18.085 21:21:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:18.085 21:21:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.085 21:21:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.085 21:21:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.085 21:21:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:18.085 21:21:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.085 21:21:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.085 21:21:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:18.085 21:21:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.085 21:21:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.085 21:21:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:18.085 21:21:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:18.085 21:21:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.085 21:21:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.085 21:21:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.085 21:21:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.085 21:21:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:18.085 21:21:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.085 21:21:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.085 21:21:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.085 21:21:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:18.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:26:18.085 00:26:18.085 --- 10.0.0.2 ping statistics --- 00:26:18.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.085 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:26:18.085 21:21:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.475 ms 00:26:18.085 00:26:18.085 --- 10.0.0.1 ping statistics --- 00:26:18.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.086 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:26:18.086 21:21:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.086 21:21:55 -- nvmf/common.sh@410 -- # return 0 00:26:18.086 21:21:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:18.086 21:21:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.086 21:21:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:18.086 21:21:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:18.086 21:21:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.086 21:21:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:18.086 21:21:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:18.086 21:21:55 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:18.086 21:21:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:18.086 21:21:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:18.086 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 21:21:55 -- nvmf/common.sh@469 -- # nvmfpid=2498010 00:26:18.086 21:21:55 -- nvmf/common.sh@470 -- # waitforlisten 2498010 00:26:18.086 21:21:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:18.086 21:21:55 -- common/autotest_common.sh@819 -- # '[' -z 2498010 ']' 00:26:18.086 21:21:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.086 21:21:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:18.086 21:21:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.086 21:21:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:18.086 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 [2024-06-08 21:21:55.149162] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:18.086 [2024-06-08 21:21:55.149227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.086 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.086 [2024-06-08 21:21:55.218661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.086 [2024-06-08 21:21:55.292273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:18.086 [2024-06-08 21:21:55.292392] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.086 [2024-06-08 21:21:55.292408] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.086 [2024-06-08 21:21:55.292415] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:18.086 [2024-06-08 21:21:55.292512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.086 [2024-06-08 21:21:55.292633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.086 [2024-06-08 21:21:55.292790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.086 [2024-06-08 21:21:55.292792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.086 21:21:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:18.086 21:21:55 -- common/autotest_common.sh@852 -- # return 0 00:26:18.086 21:21:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:18.086 21:21:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:18.086 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 21:21:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.086 21:21:55 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.086 21:21:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.086 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 [2024-06-08 21:21:55.969626] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.086 21:21:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.086 21:21:55 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:18.086 21:21:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.086 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 Malloc0 00:26:18.086 21:21:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.086 21:21:55 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:18.086 21:21:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.086 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.086 21:21:56 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:18.086 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.086 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.086 21:21:56 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.086 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.086 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 [2024-06-08 21:21:56.028881] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.086 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.086 21:21:56 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:18.086 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.086 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.086 [2024-06-08 21:21:56.040699] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:18.086 [ 00:26:18.086 { 00:26:18.086 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:18.086 "subtype": "Discovery", 00:26:18.086 "listen_addresses": [], 00:26:18.086 "allow_any_host": true, 00:26:18.086 "hosts": [] 00:26:18.086 }, 00:26:18.086 { 00:26:18.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:18.086 "subtype": "NVMe", 00:26:18.086 "listen_addresses": [ 00:26:18.086 { 00:26:18.086 "transport": "TCP", 00:26:18.086 "trtype": "TCP", 00:26:18.086 "adrfam": "IPv4", 00:26:18.086 "traddr": "10.0.0.2", 00:26:18.086 "trsvcid": "4420" 00:26:18.086 } 00:26:18.086 ], 00:26:18.086 "allow_any_host": true, 00:26:18.086 "hosts": [], 00:26:18.086 "serial_number": "SPDK00000000000001", 00:26:18.086 "model_number": "SPDK bdev Controller", 00:26:18.086 "max_namespaces": 2, 00:26:18.086 "min_cntlid": 1, 00:26:18.086 "max_cntlid": 65519, 00:26:18.086 "namespaces": [ 00:26:18.086 { 00:26:18.086 "nsid": 1, 00:26:18.086 "bdev_name": "Malloc0", 00:26:18.086 "name": "Malloc0", 00:26:18.086 "nguid": "3DB272B00B3740068B89F4400835E060", 00:26:18.086 "uuid": "3db272b0-0b37-4006-8b89-f4400835e060" 00:26:18.086 } 00:26:18.086 ] 00:26:18.086 } 00:26:18.086 ] 00:26:18.086 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.086 21:21:56 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:18.086 21:21:56 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:18.086 21:21:56 -- host/aer.sh@33 -- # aerpid=2498364 00:26:18.086 21:21:56 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:18.086 21:21:56 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:18.086 21:21:56 -- common/autotest_common.sh@1244 -- # local i=0 00:26:18.086 21:21:56 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:18.086 21:21:56 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:18.086 21:21:56 -- common/autotest_common.sh@1247 -- # i=1 00:26:18.086 21:21:56 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:18.086 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.086 21:21:56 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:18.086 21:21:56 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:18.086 21:21:56 -- common/autotest_common.sh@1247 -- # i=2 00:26:18.086 21:21:56 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:18.348 21:21:56 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:18.348 21:21:56 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:26:18.348 21:21:56 -- common/autotest_common.sh@1247 -- # i=3 00:26:18.348 21:21:56 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:18.348 21:21:56 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:18.348 21:21:56 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:18.348 21:21:56 -- common/autotest_common.sh@1255 -- # return 0 00:26:18.348 21:21:56 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:18.348 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.348 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.348 Malloc1 00:26:18.348 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.348 21:21:56 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:18.348 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.348 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.348 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.348 21:21:56 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:18.348 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.348 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.348 [ 00:26:18.348 { 00:26:18.348 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:18.348 "subtype": "Discovery", 00:26:18.348 "listen_addresses": [], 00:26:18.348 "allow_any_host": true, 00:26:18.348 "hosts": [] 00:26:18.348 }, 00:26:18.348 { 00:26:18.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.348 "subtype": "NVMe", 00:26:18.348 "listen_addresses": [ 00:26:18.348 { 00:26:18.348 "transport": "TCP", 00:26:18.348 "trtype": "TCP", 00:26:18.348 "adrfam": "IPv4", 00:26:18.348 "traddr": "10.0.0.2", 00:26:18.348 "trsvcid": "4420" 00:26:18.348 } 00:26:18.348 ], 00:26:18.348 "allow_any_host": true, 00:26:18.348 "hosts": [], 00:26:18.348 "serial_number": "SPDK00000000000001", 00:26:18.348 "model_number": "SPDK bdev Controller", 00:26:18.348 "max_namespaces": 2, 00:26:18.348 "min_cntlid": 1, 00:26:18.348 "max_cntlid": 65519, 00:26:18.609 "namespaces": [ 00:26:18.609 { 00:26:18.609 "nsid": 1, 00:26:18.609 "bdev_name": "Malloc0", 00:26:18.609 "name": "Malloc0", 00:26:18.609 "nguid": "3DB272B00B3740068B89F4400835E060", 00:26:18.609 Asynchronous Event Request test 00:26:18.609 Attaching to 10.0.0.2 00:26:18.609 Attached to 10.0.0.2 00:26:18.609 Registering asynchronous event callbacks... 00:26:18.609 Starting namespace attribute notice tests for all controllers... 00:26:18.609 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:18.609 aer_cb - Changed Namespace 00:26:18.609 Cleaning up... 
00:26:18.609 "uuid": "3db272b0-0b37-4006-8b89-f4400835e060" 00:26:18.609 }, 00:26:18.609 { 00:26:18.609 "nsid": 2, 00:26:18.609 "bdev_name": "Malloc1", 00:26:18.609 "name": "Malloc1", 00:26:18.609 "nguid": "E9661105D92547A58BCB6A0A257CD3DE", 00:26:18.609 "uuid": "e9661105-d925-47a5-8bcb-6a0a257cd3de" 00:26:18.609 } 00:26:18.609 ] 00:26:18.609 } 00:26:18.609 ] 00:26:18.609 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.609 21:21:56 -- host/aer.sh@43 -- # wait 2498364 00:26:18.609 21:21:56 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:18.609 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.609 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.609 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.609 21:21:56 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:18.609 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.609 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.609 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.609 21:21:56 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.609 21:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.609 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:18.609 21:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.609 21:21:56 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:18.609 21:21:56 -- host/aer.sh@51 -- # nvmftestfini 00:26:18.609 21:21:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:18.610 21:21:56 -- nvmf/common.sh@116 -- # sync 00:26:18.610 21:21:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:18.610 21:21:56 -- nvmf/common.sh@119 -- # set +e 00:26:18.610 21:21:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:18.610 21:21:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:18.610 rmmod nvme_tcp 00:26:18.610 rmmod nvme_fabrics 00:26:18.610 rmmod nvme_keyring 00:26:18.610 21:21:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:18.610 21:21:56 -- nvmf/common.sh@123 -- # set -e 00:26:18.610 21:21:56 -- nvmf/common.sh@124 -- # return 0 00:26:18.610 21:21:56 -- nvmf/common.sh@477 -- # '[' -n 2498010 ']' 00:26:18.610 21:21:56 -- nvmf/common.sh@478 -- # killprocess 2498010 00:26:18.610 21:21:56 -- common/autotest_common.sh@926 -- # '[' -z 2498010 ']' 00:26:18.610 21:21:56 -- common/autotest_common.sh@930 -- # kill -0 2498010 00:26:18.610 21:21:56 -- common/autotest_common.sh@931 -- # uname 00:26:18.610 21:21:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:18.610 21:21:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2498010 00:26:18.610 21:21:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:18.610 21:21:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:18.610 21:21:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2498010' 00:26:18.610 killing process with pid 2498010 00:26:18.610 21:21:56 -- common/autotest_common.sh@945 -- # kill 2498010 00:26:18.610 [2024-06-08 21:21:56.619505] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:18.610 21:21:56 -- common/autotest_common.sh@950 -- # wait 2498010 00:26:18.871 21:21:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:18.871 21:21:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:18.871 
21:21:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:18.871 21:21:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.871 21:21:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:18.871 21:21:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.871 21:21:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.871 21:21:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.784 21:21:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:20.784 00:26:20.784 real 0m11.014s 00:26:20.784 user 0m7.958s 00:26:20.784 sys 0m5.677s 00:26:20.784 21:21:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.784 21:21:58 -- common/autotest_common.sh@10 -- # set +x 00:26:20.784 ************************************ 00:26:20.784 END TEST nvmf_aer 00:26:20.784 ************************************ 00:26:20.784 21:21:58 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:20.784 21:21:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:20.784 21:21:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:20.784 21:21:58 -- common/autotest_common.sh@10 -- # set +x 00:26:21.045 ************************************ 00:26:21.045 START TEST nvmf_async_init 00:26:21.045 ************************************ 00:26:21.045 21:21:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:21.045 * Looking for test storage... 00:26:21.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.045 21:21:58 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.045 21:21:58 -- nvmf/common.sh@7 -- # uname -s 00:26:21.046 21:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.046 21:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.046 21:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.046 21:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.046 21:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.046 21:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.046 21:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.046 21:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.046 21:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.046 21:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.046 21:21:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.046 21:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.046 21:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.046 21:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.046 21:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.046 21:21:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.046 21:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.046 21:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.046 21:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.046 21:21:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.046 21:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.046 21:21:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.046 21:21:58 -- paths/export.sh@5 -- # export PATH 00:26:21.046 21:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.046 21:21:58 -- nvmf/common.sh@46 -- # : 0 00:26:21.046 21:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:21.046 21:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:21.046 21:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:21.046 21:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.046 21:21:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.046 21:21:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:21.046 21:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:21.046 21:21:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:21.046 21:21:59 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:21.046 21:21:59 -- host/async_init.sh@14 -- # null_block_size=512 00:26:21.046 21:21:59 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:21.046 21:21:59 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:21.046 21:21:59 -- host/async_init.sh@20 -- # uuidgen 00:26:21.046 21:21:59 -- host/async_init.sh@20 -- # tr -d - 00:26:21.046 21:21:59 -- host/async_init.sh@20 -- # nguid=ba57d79174524cab943325c685313a82 00:26:21.046 21:21:59 -- host/async_init.sh@22 -- # nvmftestinit 00:26:21.046 21:21:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
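At this point async_init.sh has only fixed its parameters: a 1024-block, 512-byte null bdev named null0, a controller name of nvme0, and an NGUID built by stripping the dashes from a freshly generated UUID (ba57d791... above); the transport, subsystem, namespace and listener are created a little further down through the rpc_cmd wrapper against the nvmf_tgt started for this test. Condensed into direct scripts/rpc.py calls, that later sequence looks roughly like the sketch below; the rpc.py path and the shell variable names are illustrative, while the RPC names and arguments are the ones visible in this log:

  nguid=$(uuidgen | tr -d -)      # e.g. ba57d79174524cab943325c685313a82
  rpc=./scripts/rpc.py            # assumed location inside the SPDK checkout

  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: attach over TCP and read the bdev back; its reported
  # "nguid"/"uuid" should match the value generated above.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1

In this test a single nvmf_tgt instance plays both roles, attaching back to its own 10.0.0.2 listener, which is why the bdev_get_bdevs output further down reports the same ba57d791-7452-4cab-9433-25c685313a82 UUID with the controller ID climbing from 1 to 3 across the reset and the TLS re-attach on port 4421.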
00:26:21.046 21:21:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.046 21:21:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:21.046 21:21:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:21.046 21:21:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:21.046 21:21:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.046 21:21:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.046 21:21:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.046 21:21:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:21.046 21:21:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:21.046 21:21:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:21.046 21:21:59 -- common/autotest_common.sh@10 -- # set +x 00:26:27.633 21:22:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:27.633 21:22:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:27.633 21:22:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:27.633 21:22:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:27.633 21:22:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:27.633 21:22:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:27.633 21:22:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:27.633 21:22:05 -- nvmf/common.sh@294 -- # net_devs=() 00:26:27.633 21:22:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:27.633 21:22:05 -- nvmf/common.sh@295 -- # e810=() 00:26:27.633 21:22:05 -- nvmf/common.sh@295 -- # local -ga e810 00:26:27.633 21:22:05 -- nvmf/common.sh@296 -- # x722=() 00:26:27.633 21:22:05 -- nvmf/common.sh@296 -- # local -ga x722 00:26:27.633 21:22:05 -- nvmf/common.sh@297 -- # mlx=() 00:26:27.633 21:22:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:27.633 21:22:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.633 21:22:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:27.633 21:22:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:27.633 21:22:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:27.633 21:22:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:27.634 21:22:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:27.634 21:22:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:27.634 21:22:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:27.634 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:27.634 21:22:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:27.634 21:22:05 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:27.634 21:22:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:27.634 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:27.634 21:22:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:27.634 21:22:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:27.634 21:22:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.634 21:22:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:27.634 21:22:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.634 21:22:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:27.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:27.634 21:22:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.634 21:22:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:27.634 21:22:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.634 21:22:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:27.634 21:22:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.634 21:22:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:27.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:27.634 21:22:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.634 21:22:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:27.634 21:22:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:27.634 21:22:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:27.634 21:22:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:27.634 21:22:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.634 21:22:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.634 21:22:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.634 21:22:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:27.634 21:22:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.634 21:22:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.634 21:22:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:27.634 21:22:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.634 21:22:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.634 21:22:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:27.634 21:22:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:27.634 21:22:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.634 21:22:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
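The device discovery traced just above is plain sysfs work: gather_supported_nvmf_pci_devs keeps the PCI functions whose device ID matches a supported NIC (0x8086:0x159b, the E810 pair at 0000:4b:00.0 and 0000:4b:00.1 here), then globs the net/ directory under each function to learn the kernel interface names, which is where cvl_0_0 and cvl_0_1 come from before the namespace setup re-runs for this test. A stand-alone sketch of that mapping, with the two PCI addresses from this log hard-coded purely for illustration:

  # Map each E810 PCI function to its bound kernel net device via sysfs.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] || continue    # skip if no netdev is bound (driver not loaded)
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done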
00:26:27.895 21:22:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.895 21:22:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.895 21:22:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:27.895 21:22:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.895 21:22:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.895 21:22:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.156 21:22:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:28.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:26:28.156 00:26:28.156 --- 10.0.0.2 ping statistics --- 00:26:28.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.156 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:26:28.156 21:22:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:26:28.156 00:26:28.156 --- 10.0.0.1 ping statistics --- 00:26:28.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.156 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:26:28.156 21:22:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.156 21:22:06 -- nvmf/common.sh@410 -- # return 0 00:26:28.156 21:22:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:28.156 21:22:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.156 21:22:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:28.156 21:22:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:28.156 21:22:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.156 21:22:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:28.156 21:22:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:28.156 21:22:06 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:28.156 21:22:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:28.156 21:22:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.156 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:28.156 21:22:06 -- nvmf/common.sh@469 -- # nvmfpid=2502381 00:26:28.156 21:22:06 -- nvmf/common.sh@470 -- # waitforlisten 2502381 00:26:28.156 21:22:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:28.156 21:22:06 -- common/autotest_common.sh@819 -- # '[' -z 2502381 ']' 00:26:28.156 21:22:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.156 21:22:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.156 21:22:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.156 21:22:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.156 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:28.156 [2024-06-08 21:22:06.098265] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:28.156 [2024-06-08 21:22:06.098336] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.156 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.156 [2024-06-08 21:22:06.168493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.156 [2024-06-08 21:22:06.241421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:28.156 [2024-06-08 21:22:06.241542] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.156 [2024-06-08 21:22:06.241551] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.156 [2024-06-08 21:22:06.241558] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.156 [2024-06-08 21:22:06.241581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.101 21:22:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.101 21:22:06 -- common/autotest_common.sh@852 -- # return 0 00:26:29.101 21:22:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:29.101 21:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 21:22:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.101 21:22:06 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 [2024-06-08 21:22:06.904712] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.101 21:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.101 21:22:06 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 null0 00:26:29.101 21:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.101 21:22:06 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 21:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.101 21:22:06 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 21:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.101 21:22:06 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ba57d79174524cab943325c685313a82 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 21:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.101 21:22:06 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.101 [2024-06-08 21:22:06.960955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.101 21:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.101 21:22:06 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:29.101 21:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.101 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:26:29.362 nvme0n1 00:26:29.362 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.362 21:22:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.362 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.362 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.362 [ 00:26:29.362 { 00:26:29.362 "name": "nvme0n1", 00:26:29.362 "aliases": [ 00:26:29.362 "ba57d791-7452-4cab-9433-25c685313a82" 00:26:29.362 ], 00:26:29.362 "product_name": "NVMe disk", 00:26:29.362 "block_size": 512, 00:26:29.362 "num_blocks": 2097152, 00:26:29.362 "uuid": "ba57d791-7452-4cab-9433-25c685313a82", 00:26:29.362 "assigned_rate_limits": { 00:26:29.362 "rw_ios_per_sec": 0, 00:26:29.362 "rw_mbytes_per_sec": 0, 00:26:29.362 "r_mbytes_per_sec": 0, 00:26:29.362 "w_mbytes_per_sec": 0 00:26:29.362 }, 00:26:29.362 "claimed": false, 00:26:29.362 "zoned": false, 00:26:29.362 "supported_io_types": { 00:26:29.362 "read": true, 00:26:29.362 "write": true, 00:26:29.362 "unmap": false, 00:26:29.362 "write_zeroes": true, 00:26:29.362 "flush": true, 00:26:29.362 "reset": true, 00:26:29.362 "compare": true, 00:26:29.362 "compare_and_write": true, 00:26:29.362 "abort": true, 00:26:29.362 "nvme_admin": true, 00:26:29.362 "nvme_io": true 00:26:29.362 }, 00:26:29.362 "driver_specific": { 00:26:29.362 "nvme": [ 00:26:29.362 { 00:26:29.362 "trid": { 00:26:29.362 "trtype": "TCP", 00:26:29.362 "adrfam": "IPv4", 00:26:29.362 "traddr": "10.0.0.2", 00:26:29.362 "trsvcid": "4420", 00:26:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.362 }, 00:26:29.362 "ctrlr_data": { 00:26:29.362 "cntlid": 1, 00:26:29.362 "vendor_id": "0x8086", 00:26:29.362 "model_number": "SPDK bdev Controller", 00:26:29.362 "serial_number": "00000000000000000000", 00:26:29.362 "firmware_revision": "24.01.1", 00:26:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.362 "oacs": { 00:26:29.362 "security": 0, 00:26:29.362 "format": 0, 00:26:29.362 "firmware": 0, 00:26:29.362 "ns_manage": 0 00:26:29.362 }, 00:26:29.362 "multi_ctrlr": true, 00:26:29.362 "ana_reporting": false 00:26:29.362 }, 00:26:29.362 "vs": { 00:26:29.362 "nvme_version": "1.3" 00:26:29.362 }, 00:26:29.362 "ns_data": { 00:26:29.362 "id": 1, 00:26:29.362 "can_share": true 00:26:29.362 } 00:26:29.362 } 00:26:29.362 ], 00:26:29.362 "mp_policy": "active_passive" 00:26:29.362 } 00:26:29.362 } 00:26:29.362 ] 00:26:29.362 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.362 21:22:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:29.362 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.362 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.362 [2024-06-08 21:22:07.225526] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:29.363 [2024-06-08 21:22:07.225583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13a50 (9): Bad file 
descriptor 00:26:29.363 [2024-06-08 21:22:07.357492] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:29.363 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.363 21:22:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.363 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 [ 00:26:29.363 { 00:26:29.363 "name": "nvme0n1", 00:26:29.363 "aliases": [ 00:26:29.363 "ba57d791-7452-4cab-9433-25c685313a82" 00:26:29.363 ], 00:26:29.363 "product_name": "NVMe disk", 00:26:29.363 "block_size": 512, 00:26:29.363 "num_blocks": 2097152, 00:26:29.363 "uuid": "ba57d791-7452-4cab-9433-25c685313a82", 00:26:29.363 "assigned_rate_limits": { 00:26:29.363 "rw_ios_per_sec": 0, 00:26:29.363 "rw_mbytes_per_sec": 0, 00:26:29.363 "r_mbytes_per_sec": 0, 00:26:29.363 "w_mbytes_per_sec": 0 00:26:29.363 }, 00:26:29.363 "claimed": false, 00:26:29.363 "zoned": false, 00:26:29.363 "supported_io_types": { 00:26:29.363 "read": true, 00:26:29.363 "write": true, 00:26:29.363 "unmap": false, 00:26:29.363 "write_zeroes": true, 00:26:29.363 "flush": true, 00:26:29.363 "reset": true, 00:26:29.363 "compare": true, 00:26:29.363 "compare_and_write": true, 00:26:29.363 "abort": true, 00:26:29.363 "nvme_admin": true, 00:26:29.363 "nvme_io": true 00:26:29.363 }, 00:26:29.363 "driver_specific": { 00:26:29.363 "nvme": [ 00:26:29.363 { 00:26:29.363 "trid": { 00:26:29.363 "trtype": "TCP", 00:26:29.363 "adrfam": "IPv4", 00:26:29.363 "traddr": "10.0.0.2", 00:26:29.363 "trsvcid": "4420", 00:26:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.363 }, 00:26:29.363 "ctrlr_data": { 00:26:29.363 "cntlid": 2, 00:26:29.363 "vendor_id": "0x8086", 00:26:29.363 "model_number": "SPDK bdev Controller", 00:26:29.363 "serial_number": "00000000000000000000", 00:26:29.363 "firmware_revision": "24.01.1", 00:26:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.363 "oacs": { 00:26:29.363 "security": 0, 00:26:29.363 "format": 0, 00:26:29.363 "firmware": 0, 00:26:29.363 "ns_manage": 0 00:26:29.363 }, 00:26:29.363 "multi_ctrlr": true, 00:26:29.363 "ana_reporting": false 00:26:29.363 }, 00:26:29.363 "vs": { 00:26:29.363 "nvme_version": "1.3" 00:26:29.363 }, 00:26:29.363 "ns_data": { 00:26:29.363 "id": 1, 00:26:29.363 "can_share": true 00:26:29.363 } 00:26:29.363 } 00:26:29.363 ], 00:26:29.363 "mp_policy": "active_passive" 00:26:29.363 } 00:26:29.363 } 00:26:29.363 ] 00:26:29.363 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.363 21:22:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.363 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.363 21:22:07 -- host/async_init.sh@53 -- # mktemp 00:26:29.363 21:22:07 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.l5SH0SfhJG 00:26:29.363 21:22:07 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:29.363 21:22:07 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.l5SH0SfhJG 00:26:29.363 21:22:07 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 21:22:07 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.363 21:22:07 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:29.363 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 [2024-06-08 21:22:07.422160] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:29.363 [2024-06-08 21:22:07.422272] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:29.363 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.363 21:22:07 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l5SH0SfhJG 00:26:29.363 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.363 21:22:07 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l5SH0SfhJG 00:26:29.363 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.363 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 [2024-06-08 21:22:07.446219] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.624 nvme0n1 00:26:29.624 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.624 21:22:07 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.624 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.624 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.624 [ 00:26:29.624 { 00:26:29.624 "name": "nvme0n1", 00:26:29.624 "aliases": [ 00:26:29.624 "ba57d791-7452-4cab-9433-25c685313a82" 00:26:29.624 ], 00:26:29.624 "product_name": "NVMe disk", 00:26:29.624 "block_size": 512, 00:26:29.624 "num_blocks": 2097152, 00:26:29.624 "uuid": "ba57d791-7452-4cab-9433-25c685313a82", 00:26:29.624 "assigned_rate_limits": { 00:26:29.624 "rw_ios_per_sec": 0, 00:26:29.624 "rw_mbytes_per_sec": 0, 00:26:29.624 "r_mbytes_per_sec": 0, 00:26:29.624 "w_mbytes_per_sec": 0 00:26:29.624 }, 00:26:29.624 "claimed": false, 00:26:29.624 "zoned": false, 00:26:29.624 "supported_io_types": { 00:26:29.624 "read": true, 00:26:29.624 "write": true, 00:26:29.624 "unmap": false, 00:26:29.624 "write_zeroes": true, 00:26:29.624 "flush": true, 00:26:29.624 "reset": true, 00:26:29.624 "compare": true, 00:26:29.624 "compare_and_write": true, 00:26:29.624 "abort": true, 00:26:29.624 "nvme_admin": true, 00:26:29.624 "nvme_io": true 00:26:29.624 }, 00:26:29.624 "driver_specific": { 00:26:29.624 "nvme": [ 00:26:29.624 { 00:26:29.624 "trid": { 00:26:29.624 "trtype": "TCP", 00:26:29.624 "adrfam": "IPv4", 00:26:29.624 "traddr": "10.0.0.2", 00:26:29.624 "trsvcid": "4421", 00:26:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.624 }, 00:26:29.624 "ctrlr_data": { 00:26:29.624 "cntlid": 3, 00:26:29.624 "vendor_id": "0x8086", 00:26:29.624 "model_number": "SPDK bdev Controller", 00:26:29.624 "serial_number": "00000000000000000000", 00:26:29.624 "firmware_revision": "24.01.1", 00:26:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.624 "oacs": { 00:26:29.624 "security": 0, 00:26:29.624 "format": 0, 00:26:29.624 "firmware": 0, 00:26:29.624 
"ns_manage": 0 00:26:29.624 }, 00:26:29.624 "multi_ctrlr": true, 00:26:29.624 "ana_reporting": false 00:26:29.624 }, 00:26:29.624 "vs": { 00:26:29.624 "nvme_version": "1.3" 00:26:29.624 }, 00:26:29.624 "ns_data": { 00:26:29.624 "id": 1, 00:26:29.624 "can_share": true 00:26:29.624 } 00:26:29.624 } 00:26:29.624 ], 00:26:29.624 "mp_policy": "active_passive" 00:26:29.624 } 00:26:29.624 } 00:26:29.624 ] 00:26:29.624 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.624 21:22:07 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.624 21:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.624 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.624 21:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.624 21:22:07 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.l5SH0SfhJG 00:26:29.624 21:22:07 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:29.624 21:22:07 -- host/async_init.sh@78 -- # nvmftestfini 00:26:29.624 21:22:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:29.625 21:22:07 -- nvmf/common.sh@116 -- # sync 00:26:29.625 21:22:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:29.625 21:22:07 -- nvmf/common.sh@119 -- # set +e 00:26:29.625 21:22:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:29.625 21:22:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:29.625 rmmod nvme_tcp 00:26:29.625 rmmod nvme_fabrics 00:26:29.625 rmmod nvme_keyring 00:26:29.625 21:22:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:29.625 21:22:07 -- nvmf/common.sh@123 -- # set -e 00:26:29.625 21:22:07 -- nvmf/common.sh@124 -- # return 0 00:26:29.625 21:22:07 -- nvmf/common.sh@477 -- # '[' -n 2502381 ']' 00:26:29.625 21:22:07 -- nvmf/common.sh@478 -- # killprocess 2502381 00:26:29.625 21:22:07 -- common/autotest_common.sh@926 -- # '[' -z 2502381 ']' 00:26:29.625 21:22:07 -- common/autotest_common.sh@930 -- # kill -0 2502381 00:26:29.625 21:22:07 -- common/autotest_common.sh@931 -- # uname 00:26:29.625 21:22:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:29.625 21:22:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2502381 00:26:29.625 21:22:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:29.625 21:22:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:29.625 21:22:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2502381' 00:26:29.625 killing process with pid 2502381 00:26:29.625 21:22:07 -- common/autotest_common.sh@945 -- # kill 2502381 00:26:29.625 21:22:07 -- common/autotest_common.sh@950 -- # wait 2502381 00:26:29.886 21:22:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:29.886 21:22:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:29.886 21:22:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:29.886 21:22:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.886 21:22:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:29.886 21:22:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.886 21:22:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.886 21:22:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.796 21:22:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:31.797 00:26:31.797 real 0m10.996s 00:26:31.797 user 0m3.843s 00:26:31.797 sys 0m5.603s 00:26:31.797 21:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.797 21:22:09 -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.797 ************************************ 00:26:31.797 END TEST nvmf_async_init 00:26:31.797 ************************************ 00:26:32.097 21:22:09 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:32.097 21:22:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:32.097 21:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:32.097 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:26:32.097 ************************************ 00:26:32.097 START TEST dma 00:26:32.097 ************************************ 00:26:32.097 21:22:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:32.097 * Looking for test storage... 00:26:32.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.097 21:22:10 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.097 21:22:10 -- nvmf/common.sh@7 -- # uname -s 00:26:32.097 21:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.097 21:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.097 21:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.097 21:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.098 21:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.098 21:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.098 21:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.098 21:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.098 21:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.098 21:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.098 21:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.098 21:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.098 21:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.098 21:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.098 21:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.098 21:22:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.098 21:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.098 21:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.098 21:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.098 21:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.098 21:22:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.098 21:22:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.098 21:22:10 -- paths/export.sh@5 -- # export PATH 00:26:32.098 21:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.098 21:22:10 -- nvmf/common.sh@46 -- # : 0 00:26:32.098 21:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:32.098 21:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:32.098 21:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:32.098 21:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.098 21:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.098 21:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:32.098 21:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:32.098 21:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:32.098 21:22:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:32.098 21:22:10 -- host/dma.sh@13 -- # exit 0 00:26:32.098 00:26:32.098 real 0m0.128s 00:26:32.098 user 0m0.055s 00:26:32.098 sys 0m0.081s 00:26:32.098 21:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.098 21:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:32.098 ************************************ 00:26:32.098 END TEST dma 00:26:32.098 ************************************ 00:26:32.098 21:22:10 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:32.098 21:22:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:32.098 21:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:32.098 21:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:32.098 ************************************ 00:26:32.098 START TEST nvmf_identify 00:26:32.098 ************************************ 00:26:32.098 21:22:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:32.363 * Looking for 
test storage... 00:26:32.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.363 21:22:10 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.363 21:22:10 -- nvmf/common.sh@7 -- # uname -s 00:26:32.363 21:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.363 21:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.363 21:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.363 21:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.363 21:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.363 21:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.363 21:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.363 21:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.363 21:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.363 21:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.363 21:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.363 21:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.363 21:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.363 21:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.363 21:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.363 21:22:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.363 21:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.363 21:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.363 21:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.363 21:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.363 21:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.363 21:22:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.363 21:22:10 -- paths/export.sh@5 -- # export PATH 00:26:32.363 21:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.363 21:22:10 -- nvmf/common.sh@46 -- # : 0 00:26:32.363 21:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:32.363 21:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:32.363 21:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:32.363 21:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.363 21:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.363 21:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:32.363 21:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:32.363 21:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:32.363 21:22:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.363 21:22:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.363 21:22:10 -- host/identify.sh@14 -- # nvmftestinit 00:26:32.363 21:22:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:32.363 21:22:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.363 21:22:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:32.363 21:22:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:32.363 21:22:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:32.363 21:22:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.363 21:22:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.363 21:22:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.363 21:22:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:32.363 21:22:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:32.363 21:22:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:32.363 21:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:38.948 21:22:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:38.948 21:22:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:38.948 21:22:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:38.948 21:22:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:38.948 21:22:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:38.948 21:22:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:38.948 21:22:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:38.948 21:22:16 -- nvmf/common.sh@294 -- # net_devs=() 00:26:38.948 21:22:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:38.948 21:22:16 -- nvmf/common.sh@295 
-- # e810=() 00:26:38.948 21:22:16 -- nvmf/common.sh@295 -- # local -ga e810 00:26:38.948 21:22:16 -- nvmf/common.sh@296 -- # x722=() 00:26:38.948 21:22:16 -- nvmf/common.sh@296 -- # local -ga x722 00:26:38.948 21:22:16 -- nvmf/common.sh@297 -- # mlx=() 00:26:38.948 21:22:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:38.948 21:22:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.948 21:22:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:38.948 21:22:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:38.948 21:22:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:38.948 21:22:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:38.948 21:22:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:38.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:38.948 21:22:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:38.948 21:22:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:38.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:38.948 21:22:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:38.948 21:22:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:38.948 21:22:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:38.948 21:22:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.948 21:22:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:38.948 21:22:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.948 21:22:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:38.948 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:26:38.948 21:22:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.948 21:22:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:38.948 21:22:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.948 21:22:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:38.948 21:22:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.948 21:22:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:38.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:38.948 21:22:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.948 21:22:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:38.948 21:22:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:38.949 21:22:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:38.949 21:22:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:38.949 21:22:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:38.949 21:22:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.949 21:22:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.949 21:22:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.949 21:22:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:38.949 21:22:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.949 21:22:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.949 21:22:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:38.949 21:22:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.949 21:22:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.949 21:22:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:38.949 21:22:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:38.949 21:22:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.949 21:22:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.949 21:22:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.949 21:22:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.949 21:22:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:38.949 21:22:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.210 21:22:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.210 21:22:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.210 21:22:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:39.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:26:39.210 00:26:39.210 --- 10.0.0.2 ping statistics --- 00:26:39.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.210 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:26:39.210 21:22:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:26:39.210 00:26:39.210 --- 10.0.0.1 ping statistics --- 00:26:39.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.210 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:26:39.210 21:22:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.210 21:22:17 -- nvmf/common.sh@410 -- # return 0 00:26:39.210 21:22:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:39.210 21:22:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.210 21:22:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:39.210 21:22:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:39.210 21:22:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.210 21:22:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:39.210 21:22:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:39.210 21:22:17 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:39.210 21:22:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:39.210 21:22:17 -- common/autotest_common.sh@10 -- # set +x 00:26:39.210 21:22:17 -- host/identify.sh@19 -- # nvmfpid=2506901 00:26:39.210 21:22:17 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.210 21:22:17 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:39.210 21:22:17 -- host/identify.sh@23 -- # waitforlisten 2506901 00:26:39.211 21:22:17 -- common/autotest_common.sh@819 -- # '[' -z 2506901 ']' 00:26:39.211 21:22:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.211 21:22:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:39.211 21:22:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.211 21:22:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:39.211 21:22:17 -- common/autotest_common.sh@10 -- # set +x 00:26:39.211 [2024-06-08 21:22:17.223044] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:39.211 [2024-06-08 21:22:17.223110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.211 [2024-06-08 21:22:17.296589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.472 [2024-06-08 21:22:17.371964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:39.472 [2024-06-08 21:22:17.372103] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.472 [2024-06-08 21:22:17.372114] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.472 [2024-06-08 21:22:17.372122] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
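For readability, the nvmf_tcp_init sequence traced above reduces to the commands below. This is only a condensed recap of what the trace already shows; every interface name, namespace name, and address (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1, 10.0.0.2) is taken from this run and is specific to this node:

  # target-side e810 port moves into its own network namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP into the initiator port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  modprobe nvme-tcp

With both pings succeeding, identify.sh launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is what the reactor startup messages that follow report.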
00:26:39.472 [2024-06-08 21:22:17.372290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.472 [2024-06-08 21:22:17.372499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.472 [2024-06-08 21:22:17.372830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.472 [2024-06-08 21:22:17.372831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.043 21:22:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:40.043 21:22:17 -- common/autotest_common.sh@852 -- # return 0 00:26:40.043 21:22:17 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:40.043 21:22:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:17 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 [2024-06-08 21:22:18.004472] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.043 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.043 21:22:18 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:40.043 21:22:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 21:22:18 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.043 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 Malloc0 00:26:40.043 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.043 21:22:18 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.043 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.043 21:22:18 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:40.043 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.043 21:22:18 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.043 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 [2024-06-08 21:22:18.103939] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.043 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.043 21:22:18 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.043 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.043 21:22:18 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:40.043 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.043 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.043 [2024-06-08 21:22:18.127753] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:40.043 [ 
00:26:40.043 { 00:26:40.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:40.043 "subtype": "Discovery", 00:26:40.043 "listen_addresses": [ 00:26:40.043 { 00:26:40.043 "transport": "TCP", 00:26:40.043 "trtype": "TCP", 00:26:40.043 "adrfam": "IPv4", 00:26:40.043 "traddr": "10.0.0.2", 00:26:40.043 "trsvcid": "4420" 00:26:40.043 } 00:26:40.043 ], 00:26:40.043 "allow_any_host": true, 00:26:40.043 "hosts": [] 00:26:40.043 }, 00:26:40.043 { 00:26:40.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.308 "subtype": "NVMe", 00:26:40.308 "listen_addresses": [ 00:26:40.308 { 00:26:40.308 "transport": "TCP", 00:26:40.308 "trtype": "TCP", 00:26:40.308 "adrfam": "IPv4", 00:26:40.308 "traddr": "10.0.0.2", 00:26:40.308 "trsvcid": "4420" 00:26:40.308 } 00:26:40.308 ], 00:26:40.308 "allow_any_host": true, 00:26:40.308 "hosts": [], 00:26:40.308 "serial_number": "SPDK00000000000001", 00:26:40.308 "model_number": "SPDK bdev Controller", 00:26:40.308 "max_namespaces": 32, 00:26:40.308 "min_cntlid": 1, 00:26:40.308 "max_cntlid": 65519, 00:26:40.308 "namespaces": [ 00:26:40.308 { 00:26:40.308 "nsid": 1, 00:26:40.308 "bdev_name": "Malloc0", 00:26:40.308 "name": "Malloc0", 00:26:40.308 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:40.308 "eui64": "ABCDEF0123456789", 00:26:40.308 "uuid": "3788c50e-b34f-4d31-8411-925a08b61788" 00:26:40.308 } 00:26:40.308 ] 00:26:40.308 } 00:26:40.308 ] 00:26:40.308 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.308 21:22:18 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:40.308 [2024-06-08 21:22:18.164951] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:40.308 [2024-06-08 21:22:18.164990] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507147 ] 00:26:40.308 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.308 [2024-06-08 21:22:18.198033] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:40.308 [2024-06-08 21:22:18.198077] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:40.308 [2024-06-08 21:22:18.198081] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:40.308 [2024-06-08 21:22:18.198093] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:40.308 [2024-06-08 21:22:18.198100] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:40.308 [2024-06-08 21:22:18.201434] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:40.308 [2024-06-08 21:22:18.201465] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20fe9e0 0 00:26:40.308 [2024-06-08 21:22:18.209409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:40.308 [2024-06-08 21:22:18.209425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:40.308 [2024-06-08 21:22:18.209430] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:40.308 [2024-06-08 21:22:18.209433] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:40.308 [2024-06-08 21:22:18.209470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.209476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.209480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.308 [2024-06-08 21:22:18.209493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:40.308 [2024-06-08 21:22:18.209510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.308 [2024-06-08 21:22:18.217411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.308 [2024-06-08 21:22:18.217420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.308 [2024-06-08 21:22:18.217423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.217428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.308 [2024-06-08 21:22:18.217438] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:40.308 [2024-06-08 21:22:18.217444] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:40.308 [2024-06-08 21:22:18.217449] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:40.308 [2024-06-08 21:22:18.217464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.217467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.217471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.308 [2024-06-08 21:22:18.217479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.308 [2024-06-08 21:22:18.217491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.308 [2024-06-08 21:22:18.217712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.308 [2024-06-08 21:22:18.217719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.308 [2024-06-08 21:22:18.217722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.217726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.308 [2024-06-08 21:22:18.217735] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:40.308 [2024-06-08 21:22:18.217742] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:40.308 [2024-06-08 21:22:18.217749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.217753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.217756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.308 [2024-06-08 21:22:18.217763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.308 [2024-06-08 21:22:18.217774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.308 [2024-06-08 21:22:18.218020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.308 [2024-06-08 21:22:18.218026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.308 [2024-06-08 21:22:18.218029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.218033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.308 [2024-06-08 21:22:18.218042] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:40.308 [2024-06-08 21:22:18.218050] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:40.308 [2024-06-08 21:22:18.218056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.218060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.308 [2024-06-08 21:22:18.218063] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.308 [2024-06-08 21:22:18.218070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.308 [2024-06-08 21:22:18.218080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.308 [2024-06-08 21:22:18.218284] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.308 [2024-06-08 
21:22:18.218290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.309 [2024-06-08 21:22:18.218294] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.218297] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.309 [2024-06-08 21:22:18.218303] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:40.309 [2024-06-08 21:22:18.218311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.218315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.218319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.218325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.309 [2024-06-08 21:22:18.218335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.309 [2024-06-08 21:22:18.218670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.309 [2024-06-08 21:22:18.218677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.309 [2024-06-08 21:22:18.218680] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.218684] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.309 [2024-06-08 21:22:18.218689] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:40.309 [2024-06-08 21:22:18.218693] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:40.309 [2024-06-08 21:22:18.218701] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:40.309 [2024-06-08 21:22:18.218806] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:40.309 [2024-06-08 21:22:18.218811] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:40.309 [2024-06-08 21:22:18.218819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.218823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.218826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.218833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.309 [2024-06-08 21:22:18.218843] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.309 [2024-06-08 21:22:18.219041] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.309 [2024-06-08 21:22:18.219050] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.309 [2024-06-08 21:22:18.219053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.309 [2024-06-08 21:22:18.219062] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:40.309 [2024-06-08 21:22:18.219071] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219075] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219078] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.219085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.309 [2024-06-08 21:22:18.219094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.309 [2024-06-08 21:22:18.219318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.309 [2024-06-08 21:22:18.219325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.309 [2024-06-08 21:22:18.219328] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219332] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.309 [2024-06-08 21:22:18.219337] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:40.309 [2024-06-08 21:22:18.219342] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:40.309 [2024-06-08 21:22:18.219349] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:40.309 [2024-06-08 21:22:18.219357] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:40.309 [2024-06-08 21:22:18.219366] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.219380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.309 [2024-06-08 21:22:18.219390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.309 [2024-06-08 21:22:18.219647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.309 [2024-06-08 21:22:18.219654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.309 [2024-06-08 21:22:18.219658] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219661] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe9e0): datao=0, datal=4096, cccid=0 00:26:40.309 [2024-06-08 21:22:18.219666] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2166730) on tqpair(0x20fe9e0): 
expected_datao=0, payload_size=4096 00:26:40.309 [2024-06-08 21:22:18.219723] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.219728] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.309 [2024-06-08 21:22:18.265421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.309 [2024-06-08 21:22:18.265425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.309 [2024-06-08 21:22:18.265438] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:40.309 [2024-06-08 21:22:18.265449] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:40.309 [2024-06-08 21:22:18.265454] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:40.309 [2024-06-08 21:22:18.265459] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:40.309 [2024-06-08 21:22:18.265463] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:40.309 [2024-06-08 21:22:18.265468] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:40.309 [2024-06-08 21:22:18.265476] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:40.309 [2024-06-08 21:22:18.265483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.265499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:40.309 [2024-06-08 21:22:18.265511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.309 [2024-06-08 21:22:18.265724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.309 [2024-06-08 21:22:18.265730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.309 [2024-06-08 21:22:18.265734] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166730) on tqpair=0x20fe9e0 00:26:40.309 [2024-06-08 21:22:18.265745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.265759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:40.309 [2024-06-08 21:22:18.265765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.265777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.309 [2024-06-08 21:22:18.265783] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265787] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.309 [2024-06-08 21:22:18.265790] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20fe9e0) 00:26:40.309 [2024-06-08 21:22:18.265796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.310 [2024-06-08 21:22:18.265801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.265805] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.265808] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 21:22:18.265814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.310 [2024-06-08 21:22:18.265819] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:40.310 [2024-06-08 21:22:18.265833] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:40.310 [2024-06-08 21:22:18.265840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.265843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.265847] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 21:22:18.265854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.310 [2024-06-08 21:22:18.265866] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166730, cid 0, qid 0 00:26:40.310 [2024-06-08 21:22:18.265871] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166890, cid 1, qid 0 00:26:40.310 [2024-06-08 21:22:18.265876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21669f0, cid 2, qid 0 00:26:40.310 [2024-06-08 21:22:18.265880] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.310 [2024-06-08 21:22:18.265885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166cb0, cid 4, qid 0 00:26:40.310 [2024-06-08 21:22:18.266161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.310 [2024-06-08 21:22:18.266168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.310 [2024-06-08 21:22:18.266171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266175] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166cb0) on tqpair=0x20fe9e0 00:26:40.310 [2024-06-08 21:22:18.266180] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:40.310 [2024-06-08 21:22:18.266185] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:40.310 [2024-06-08 21:22:18.266196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266200] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 21:22:18.266210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.310 [2024-06-08 21:22:18.266220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166cb0, cid 4, qid 0 00:26:40.310 [2024-06-08 21:22:18.266464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.310 [2024-06-08 21:22:18.266471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.310 [2024-06-08 21:22:18.266474] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266478] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe9e0): datao=0, datal=4096, cccid=4 00:26:40.310 [2024-06-08 21:22:18.266482] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2166cb0) on tqpair(0x20fe9e0): expected_datao=0, payload_size=4096 00:26:40.310 [2024-06-08 21:22:18.266490] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266494] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.310 [2024-06-08 21:22:18.266642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.310 [2024-06-08 21:22:18.266646] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166cb0) on tqpair=0x20fe9e0 00:26:40.310 [2024-06-08 21:22:18.266661] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:40.310 [2024-06-08 21:22:18.266678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 21:22:18.266695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.310 [2024-06-08 21:22:18.266702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.266709] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 
21:22:18.266715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.310 [2024-06-08 21:22:18.266733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166cb0, cid 4, qid 0 00:26:40.310 [2024-06-08 21:22:18.266738] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166e10, cid 5, qid 0 00:26:40.310 [2024-06-08 21:22:18.267030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.310 [2024-06-08 21:22:18.267037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.310 [2024-06-08 21:22:18.267040] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.267043] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe9e0): datao=0, datal=1024, cccid=4 00:26:40.310 [2024-06-08 21:22:18.267048] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2166cb0) on tqpair(0x20fe9e0): expected_datao=0, payload_size=1024 00:26:40.310 [2024-06-08 21:22:18.267055] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.267058] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.267064] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.310 [2024-06-08 21:22:18.267070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.310 [2024-06-08 21:22:18.267073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.267077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166e10) on tqpair=0x20fe9e0 00:26:40.310 [2024-06-08 21:22:18.307626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.310 [2024-06-08 21:22:18.307638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.310 [2024-06-08 21:22:18.307641] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.307645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166cb0) on tqpair=0x20fe9e0 00:26:40.310 [2024-06-08 21:22:18.307657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.307661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.307665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 21:22:18.307672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.310 [2024-06-08 21:22:18.307688] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166cb0, cid 4, qid 0 00:26:40.310 [2024-06-08 21:22:18.307924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.310 [2024-06-08 21:22:18.307931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.310 [2024-06-08 21:22:18.307934] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.307938] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe9e0): datao=0, datal=3072, cccid=4 00:26:40.310 [2024-06-08 21:22:18.307942] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2166cb0) on tqpair(0x20fe9e0): expected_datao=0, payload_size=3072 
00:26:40.310 [2024-06-08 21:22:18.307949] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.307953] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.308094] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.310 [2024-06-08 21:22:18.308104] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.310 [2024-06-08 21:22:18.308108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.308111] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166cb0) on tqpair=0x20fe9e0 00:26:40.310 [2024-06-08 21:22:18.308120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.308124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.308127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20fe9e0) 00:26:40.310 [2024-06-08 21:22:18.308134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.310 [2024-06-08 21:22:18.308148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166cb0, cid 4, qid 0 00:26:40.310 [2024-06-08 21:22:18.308395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.310 [2024-06-08 21:22:18.312406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.310 [2024-06-08 21:22:18.312413] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.310 [2024-06-08 21:22:18.312416] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20fe9e0): datao=0, datal=8, cccid=4 00:26:40.310 [2024-06-08 21:22:18.312421] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2166cb0) on tqpair(0x20fe9e0): expected_datao=0, payload_size=8 00:26:40.311 [2024-06-08 21:22:18.312428] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.311 [2024-06-08 21:22:18.312431] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.311 [2024-06-08 21:22:18.352409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.311 [2024-06-08 21:22:18.352418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.311 [2024-06-08 21:22:18.352422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.311 [2024-06-08 21:22:18.352426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166cb0) on tqpair=0x20fe9e0 00:26:40.311 ===================================================== 00:26:40.311 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:40.311 ===================================================== 00:26:40.311 Controller Capabilities/Features 00:26:40.311 ================================ 00:26:40.311 Vendor ID: 0000 00:26:40.311 Subsystem Vendor ID: 0000 00:26:40.311 Serial Number: .................... 00:26:40.311 Model Number: ........................................ 
00:26:40.311 Firmware Version: 24.01.1 00:26:40.311 Recommended Arb Burst: 0 00:26:40.311 IEEE OUI Identifier: 00 00 00 00:26:40.311 Multi-path I/O 00:26:40.311 May have multiple subsystem ports: No 00:26:40.311 May have multiple controllers: No 00:26:40.311 Associated with SR-IOV VF: No 00:26:40.311 Max Data Transfer Size: 131072 00:26:40.311 Max Number of Namespaces: 0 00:26:40.311 Max Number of I/O Queues: 1024 00:26:40.311 NVMe Specification Version (VS): 1.3 00:26:40.311 NVMe Specification Version (Identify): 1.3 00:26:40.311 Maximum Queue Entries: 128 00:26:40.311 Contiguous Queues Required: Yes 00:26:40.311 Arbitration Mechanisms Supported 00:26:40.311 Weighted Round Robin: Not Supported 00:26:40.311 Vendor Specific: Not Supported 00:26:40.311 Reset Timeout: 15000 ms 00:26:40.311 Doorbell Stride: 4 bytes 00:26:40.311 NVM Subsystem Reset: Not Supported 00:26:40.311 Command Sets Supported 00:26:40.311 NVM Command Set: Supported 00:26:40.311 Boot Partition: Not Supported 00:26:40.311 Memory Page Size Minimum: 4096 bytes 00:26:40.311 Memory Page Size Maximum: 4096 bytes 00:26:40.311 Persistent Memory Region: Not Supported 00:26:40.311 Optional Asynchronous Events Supported 00:26:40.311 Namespace Attribute Notices: Not Supported 00:26:40.311 Firmware Activation Notices: Not Supported 00:26:40.311 ANA Change Notices: Not Supported 00:26:40.311 PLE Aggregate Log Change Notices: Not Supported 00:26:40.311 LBA Status Info Alert Notices: Not Supported 00:26:40.311 EGE Aggregate Log Change Notices: Not Supported 00:26:40.311 Normal NVM Subsystem Shutdown event: Not Supported 00:26:40.311 Zone Descriptor Change Notices: Not Supported 00:26:40.311 Discovery Log Change Notices: Supported 00:26:40.311 Controller Attributes 00:26:40.311 128-bit Host Identifier: Not Supported 00:26:40.311 Non-Operational Permissive Mode: Not Supported 00:26:40.311 NVM Sets: Not Supported 00:26:40.311 Read Recovery Levels: Not Supported 00:26:40.311 Endurance Groups: Not Supported 00:26:40.311 Predictable Latency Mode: Not Supported 00:26:40.311 Traffic Based Keep ALive: Not Supported 00:26:40.311 Namespace Granularity: Not Supported 00:26:40.311 SQ Associations: Not Supported 00:26:40.311 UUID List: Not Supported 00:26:40.311 Multi-Domain Subsystem: Not Supported 00:26:40.311 Fixed Capacity Management: Not Supported 00:26:40.311 Variable Capacity Management: Not Supported 00:26:40.311 Delete Endurance Group: Not Supported 00:26:40.311 Delete NVM Set: Not Supported 00:26:40.311 Extended LBA Formats Supported: Not Supported 00:26:40.311 Flexible Data Placement Supported: Not Supported 00:26:40.311 00:26:40.311 Controller Memory Buffer Support 00:26:40.311 ================================ 00:26:40.311 Supported: No 00:26:40.311 00:26:40.311 Persistent Memory Region Support 00:26:40.311 ================================ 00:26:40.311 Supported: No 00:26:40.311 00:26:40.311 Admin Command Set Attributes 00:26:40.311 ============================ 00:26:40.311 Security Send/Receive: Not Supported 00:26:40.311 Format NVM: Not Supported 00:26:40.311 Firmware Activate/Download: Not Supported 00:26:40.311 Namespace Management: Not Supported 00:26:40.311 Device Self-Test: Not Supported 00:26:40.311 Directives: Not Supported 00:26:40.311 NVMe-MI: Not Supported 00:26:40.311 Virtualization Management: Not Supported 00:26:40.311 Doorbell Buffer Config: Not Supported 00:26:40.311 Get LBA Status Capability: Not Supported 00:26:40.311 Command & Feature Lockdown Capability: Not Supported 00:26:40.311 Abort Command Limit: 1 00:26:40.311 
Async Event Request Limit: 4 00:26:40.311 Number of Firmware Slots: N/A 00:26:40.311 Firmware Slot 1 Read-Only: N/A 00:26:40.311 Firmware Activation Without Reset: N/A 00:26:40.311 Multiple Update Detection Support: N/A 00:26:40.311 Firmware Update Granularity: No Information Provided 00:26:40.311 Per-Namespace SMART Log: No 00:26:40.311 Asymmetric Namespace Access Log Page: Not Supported 00:26:40.311 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:40.311 Command Effects Log Page: Not Supported 00:26:40.311 Get Log Page Extended Data: Supported 00:26:40.311 Telemetry Log Pages: Not Supported 00:26:40.311 Persistent Event Log Pages: Not Supported 00:26:40.311 Supported Log Pages Log Page: May Support 00:26:40.311 Commands Supported & Effects Log Page: Not Supported 00:26:40.311 Feature Identifiers & Effects Log Page:May Support 00:26:40.311 NVMe-MI Commands & Effects Log Page: May Support 00:26:40.311 Data Area 4 for Telemetry Log: Not Supported 00:26:40.311 Error Log Page Entries Supported: 128 00:26:40.311 Keep Alive: Not Supported 00:26:40.311 00:26:40.311 NVM Command Set Attributes 00:26:40.311 ========================== 00:26:40.311 Submission Queue Entry Size 00:26:40.311 Max: 1 00:26:40.311 Min: 1 00:26:40.311 Completion Queue Entry Size 00:26:40.311 Max: 1 00:26:40.311 Min: 1 00:26:40.311 Number of Namespaces: 0 00:26:40.311 Compare Command: Not Supported 00:26:40.311 Write Uncorrectable Command: Not Supported 00:26:40.311 Dataset Management Command: Not Supported 00:26:40.311 Write Zeroes Command: Not Supported 00:26:40.311 Set Features Save Field: Not Supported 00:26:40.311 Reservations: Not Supported 00:26:40.311 Timestamp: Not Supported 00:26:40.311 Copy: Not Supported 00:26:40.311 Volatile Write Cache: Not Present 00:26:40.311 Atomic Write Unit (Normal): 1 00:26:40.311 Atomic Write Unit (PFail): 1 00:26:40.311 Atomic Compare & Write Unit: 1 00:26:40.311 Fused Compare & Write: Supported 00:26:40.311 Scatter-Gather List 00:26:40.311 SGL Command Set: Supported 00:26:40.311 SGL Keyed: Supported 00:26:40.311 SGL Bit Bucket Descriptor: Not Supported 00:26:40.311 SGL Metadata Pointer: Not Supported 00:26:40.311 Oversized SGL: Not Supported 00:26:40.311 SGL Metadata Address: Not Supported 00:26:40.311 SGL Offset: Supported 00:26:40.311 Transport SGL Data Block: Not Supported 00:26:40.311 Replay Protected Memory Block: Not Supported 00:26:40.311 00:26:40.311 Firmware Slot Information 00:26:40.311 ========================= 00:26:40.311 Active slot: 0 00:26:40.311 00:26:40.311 00:26:40.311 Error Log 00:26:40.311 ========= 00:26:40.311 00:26:40.311 Active Namespaces 00:26:40.311 ================= 00:26:40.311 Discovery Log Page 00:26:40.311 ================== 00:26:40.311 Generation Counter: 2 00:26:40.311 Number of Records: 2 00:26:40.311 Record Format: 0 00:26:40.311 00:26:40.311 Discovery Log Entry 0 00:26:40.311 ---------------------- 00:26:40.311 Transport Type: 3 (TCP) 00:26:40.311 Address Family: 1 (IPv4) 00:26:40.311 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:40.311 Entry Flags: 00:26:40.311 Duplicate Returned Information: 1 00:26:40.311 Explicit Persistent Connection Support for Discovery: 1 00:26:40.311 Transport Requirements: 00:26:40.311 Secure Channel: Not Required 00:26:40.311 Port ID: 0 (0x0000) 00:26:40.311 Controller ID: 65535 (0xffff) 00:26:40.311 Admin Max SQ Size: 128 00:26:40.311 Transport Service Identifier: 4420 00:26:40.311 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:40.312 Transport Address: 10.0.0.2 00:26:40.312 
Discovery Log Entry 1 00:26:40.312 ---------------------- 00:26:40.312 Transport Type: 3 (TCP) 00:26:40.312 Address Family: 1 (IPv4) 00:26:40.312 Subsystem Type: 2 (NVM Subsystem) 00:26:40.312 Entry Flags: 00:26:40.312 Duplicate Returned Information: 0 00:26:40.312 Explicit Persistent Connection Support for Discovery: 0 00:26:40.312 Transport Requirements: 00:26:40.312 Secure Channel: Not Required 00:26:40.312 Port ID: 0 (0x0000) 00:26:40.312 Controller ID: 65535 (0xffff) 00:26:40.312 Admin Max SQ Size: 128 00:26:40.312 Transport Service Identifier: 4420 00:26:40.312 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:40.312 Transport Address: 10.0.0.2 [2024-06-08 21:22:18.352514] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:40.312 [2024-06-08 21:22:18.352526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.312 [2024-06-08 21:22:18.352533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.312 [2024-06-08 21:22:18.352539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.312 [2024-06-08 21:22:18.352545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.312 [2024-06-08 21:22:18.352555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.352559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.352562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.352569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.352582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.352874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.352881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.352884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.352888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.352895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.352899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.352905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.352912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.352926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.353184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.353191] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.353194] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.353203] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:40.312 [2024-06-08 21:22:18.353208] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:40.312 [2024-06-08 21:22:18.353217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.353231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.353240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.353453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.353460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.353463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.353477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.353491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.353501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.353701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.353707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.353711] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.353724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.353731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.353738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.353748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.353995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 
21:22:18.354001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.354005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354008] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.354021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354025] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.354035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.354045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.354251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.354258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.354261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354264] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.354274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.354288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.354298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.354550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.354556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.354560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.354573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.312 [2024-06-08 21:22:18.354581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.312 [2024-06-08 21:22:18.354587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.312 [2024-06-08 21:22:18.354597] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.312 [2024-06-08 21:22:18.354794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.312 [2024-06-08 21:22:18.354800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.312 [2024-06-08 21:22:18.354803] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:26:40.312 [2024-06-08 21:22:18.354807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.312 [2024-06-08 21:22:18.354817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.354820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.354824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.313 [2024-06-08 21:22:18.354831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.313 [2024-06-08 21:22:18.354840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.313 [2024-06-08 21:22:18.355058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.313 [2024-06-08 21:22:18.355065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.313 [2024-06-08 21:22:18.355068] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.313 [2024-06-08 21:22:18.355084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.313 [2024-06-08 21:22:18.355098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.313 [2024-06-08 21:22:18.355107] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.313 [2024-06-08 21:22:18.355349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.313 [2024-06-08 21:22:18.355355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.313 [2024-06-08 21:22:18.355358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.313 [2024-06-08 21:22:18.355372] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355375] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355379] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.313 [2024-06-08 21:22:18.355386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.313 [2024-06-08 21:22:18.355395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.313 [2024-06-08 21:22:18.355639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.313 [2024-06-08 21:22:18.355646] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.313 [2024-06-08 21:22:18.355649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.313 [2024-06-08 21:22:18.355663] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.313 [2024-06-08 21:22:18.355677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.313 [2024-06-08 21:22:18.355687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.313 [2024-06-08 21:22:18.355891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.313 [2024-06-08 21:22:18.355897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.313 [2024-06-08 21:22:18.355900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.313 [2024-06-08 21:22:18.355904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.314 [2024-06-08 21:22:18.355914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.314 [2024-06-08 21:22:18.355918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.314 [2024-06-08 21:22:18.355921] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.314 [2024-06-08 21:22:18.355928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.314 [2024-06-08 21:22:18.355938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.314 [2024-06-08 21:22:18.356188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.314 [2024-06-08 21:22:18.356194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.314 [2024-06-08 21:22:18.356197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.314 [2024-06-08 21:22:18.356201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.314 [2024-06-08 21:22:18.356211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.314 [2024-06-08 21:22:18.356219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.314 [2024-06-08 21:22:18.356223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.314 [2024-06-08 21:22:18.356230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.314 [2024-06-08 21:22:18.356239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.314 [2024-06-08 21:22:18.360409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.314 [2024-06-08 21:22:18.360417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.314 [2024-06-08 21:22:18.360421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.315 [2024-06-08 21:22:18.360424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.315 [2024-06-08 21:22:18.360435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.315 [2024-06-08 21:22:18.360439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.315 [2024-06-08 
21:22:18.360442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20fe9e0) 00:26:40.315 [2024-06-08 21:22:18.360449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.315 [2024-06-08 21:22:18.360460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2166b50, cid 3, qid 0 00:26:40.315 [2024-06-08 21:22:18.360687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.315 [2024-06-08 21:22:18.360693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.315 [2024-06-08 21:22:18.360696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.315 [2024-06-08 21:22:18.360700] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2166b50) on tqpair=0x20fe9e0 00:26:40.315 [2024-06-08 21:22:18.360708] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:40.315 00:26:40.315 21:22:18 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:40.581 [2024-06-08 21:22:18.398093] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:40.581 [2024-06-08 21:22:18.398160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507155 ] 00:26:40.581 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.581 [2024-06-08 21:22:18.431957] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:40.581 [2024-06-08 21:22:18.432001] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:40.581 [2024-06-08 21:22:18.432006] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:40.581 [2024-06-08 21:22:18.432016] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:40.581 [2024-06-08 21:22:18.432023] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:40.581 [2024-06-08 21:22:18.435428] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:40.581 [2024-06-08 21:22:18.435452] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x50f9e0 0 00:26:40.581 [2024-06-08 21:22:18.443410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:40.581 [2024-06-08 21:22:18.443422] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:40.581 [2024-06-08 21:22:18.443426] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:40.581 [2024-06-08 21:22:18.443433] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:40.581 [2024-06-08 21:22:18.443465] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.443471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.443475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.581 [2024-06-08 
21:22:18.443487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:40.581 [2024-06-08 21:22:18.443503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.581 [2024-06-08 21:22:18.450411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.581 [2024-06-08 21:22:18.450420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.581 [2024-06-08 21:22:18.450424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.581 [2024-06-08 21:22:18.450437] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:40.581 [2024-06-08 21:22:18.450442] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:40.581 [2024-06-08 21:22:18.450447] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:40.581 [2024-06-08 21:22:18.450461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.581 [2024-06-08 21:22:18.450476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.581 [2024-06-08 21:22:18.450488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.581 [2024-06-08 21:22:18.450590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.581 [2024-06-08 21:22:18.450597] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.581 [2024-06-08 21:22:18.450600] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.581 [2024-06-08 21:22:18.450612] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:40.581 [2024-06-08 21:22:18.450619] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:40.581 [2024-06-08 21:22:18.450626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.581 [2024-06-08 21:22:18.450640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.581 [2024-06-08 21:22:18.450651] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.581 [2024-06-08 21:22:18.450756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.581 [2024-06-08 21:22:18.450762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.581 [2024-06-08 21:22:18.450765] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450769] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.581 [2024-06-08 21:22:18.450774] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:40.581 [2024-06-08 21:22:18.450782] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:40.581 [2024-06-08 21:22:18.450792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.581 [2024-06-08 21:22:18.450806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.581 [2024-06-08 21:22:18.450816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.581 [2024-06-08 21:22:18.450921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.581 [2024-06-08 21:22:18.450927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.581 [2024-06-08 21:22:18.450930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.581 [2024-06-08 21:22:18.450939] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:40.581 [2024-06-08 21:22:18.450948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.450955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.581 [2024-06-08 21:22:18.450962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.581 [2024-06-08 21:22:18.450972] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.581 [2024-06-08 21:22:18.451063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.581 [2024-06-08 21:22:18.451069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.581 [2024-06-08 21:22:18.451072] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.451076] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.581 [2024-06-08 21:22:18.451080] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:40.581 [2024-06-08 21:22:18.451085] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:40.581 [2024-06-08 21:22:18.451092] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:40.581 [2024-06-08 21:22:18.451198] 
nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:40.581 [2024-06-08 21:22:18.451202] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:40.581 [2024-06-08 21:22:18.451209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.451213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.451216] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.581 [2024-06-08 21:22:18.451223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.581 [2024-06-08 21:22:18.451233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.581 [2024-06-08 21:22:18.451326] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.581 [2024-06-08 21:22:18.451332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.581 [2024-06-08 21:22:18.451335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.581 [2024-06-08 21:22:18.451339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.581 [2024-06-08 21:22:18.451344] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:40.581 [2024-06-08 21:22:18.451356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.451370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.582 [2024-06-08 21:22:18.451380] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.582 [2024-06-08 21:22:18.451479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.582 [2024-06-08 21:22:18.451486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.582 [2024-06-08 21:22:18.451489] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.582 [2024-06-08 21:22:18.451498] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:40.582 [2024-06-08 21:22:18.451502] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.451510] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:40.582 [2024-06-08 21:22:18.451522] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.451530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 
[2024-06-08 21:22:18.451534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.451544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.582 [2024-06-08 21:22:18.451555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.582 [2024-06-08 21:22:18.451678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.582 [2024-06-08 21:22:18.451684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.582 [2024-06-08 21:22:18.451688] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451691] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=4096, cccid=0 00:26:40.582 [2024-06-08 21:22:18.451696] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577730) on tqpair(0x50f9e0): expected_datao=0, payload_size=4096 00:26:40.582 [2024-06-08 21:22:18.451758] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451763] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.582 [2024-06-08 21:22:18.451946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.582 [2024-06-08 21:22:18.451949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.451953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.582 [2024-06-08 21:22:18.451960] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:40.582 [2024-06-08 21:22:18.451968] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:40.582 [2024-06-08 21:22:18.451972] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:40.582 [2024-06-08 21:22:18.451976] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:40.582 [2024-06-08 21:22:18.451982] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:40.582 [2024-06-08 21:22:18.451987] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.451995] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.452002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:40.582 [2024-06-08 
21:22:18.452027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.582 [2024-06-08 21:22:18.452121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.582 [2024-06-08 21:22:18.452127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.582 [2024-06-08 21:22:18.452131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577730) on tqpair=0x50f9e0 00:26:40.582 [2024-06-08 21:22:18.452141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.582 [2024-06-08 21:22:18.452161] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452168] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.582 [2024-06-08 21:22:18.452179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452186] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.582 [2024-06-08 21:22:18.452198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.582 [2024-06-08 21:22:18.452215] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.452225] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.452231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:40.582 [2024-06-08 21:22:18.452258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577730, cid 0, qid 0 00:26:40.582 [2024-06-08 21:22:18.452263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577890, cid 1, qid 0 00:26:40.582 [2024-06-08 21:22:18.452268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5779f0, cid 2, qid 0 00:26:40.582 [2024-06-08 21:22:18.452273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.582 [2024-06-08 21:22:18.452277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.582 [2024-06-08 21:22:18.452408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.582 [2024-06-08 21:22:18.452415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.582 [2024-06-08 21:22:18.452418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.582 [2024-06-08 21:22:18.452427] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:40.582 [2024-06-08 21:22:18.452432] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.452440] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.452445] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:40.582 [2024-06-08 21:22:18.452451] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.582 [2024-06-08 21:22:18.452458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.582 [2024-06-08 21:22:18.452465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:40.582 [2024-06-08 21:22:18.452475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.583 [2024-06-08 21:22:18.452573] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.583 [2024-06-08 21:22:18.452579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.583 [2024-06-08 21:22:18.452582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.452586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.583 [2024-06-08 21:22:18.452637] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.452645] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.452653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.452656] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.452660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.583 [2024-06-08 21:22:18.452667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.583 [2024-06-08 21:22:18.452677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.583 [2024-06-08 21:22:18.452782] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.583 [2024-06-08 21:22:18.452788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.583 [2024-06-08 21:22:18.452792] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.452798] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=4096, cccid=4 00:26:40.583 [2024-06-08 21:22:18.452802] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577cb0) on tqpair(0x50f9e0): expected_datao=0, payload_size=4096 00:26:40.583 [2024-06-08 21:22:18.452932] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.452936] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.583 [2024-06-08 21:22:18.496421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.583 [2024-06-08 21:22:18.496424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.583 [2024-06-08 21:22:18.496439] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:40.583 [2024-06-08 21:22:18.496452] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.496461] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.496467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.583 [2024-06-08 21:22:18.496482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.583 [2024-06-08 21:22:18.496494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.583 [2024-06-08 21:22:18.496595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.583 [2024-06-08 21:22:18.496602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.583 [2024-06-08 21:22:18.496605] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496609] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=4096, cccid=4 00:26:40.583 [2024-06-08 21:22:18.496613] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577cb0) on tqpair(0x50f9e0): expected_datao=0, payload_size=4096 00:26:40.583 [2024-06-08 21:22:18.496754] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496758] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.583 [2024-06-08 21:22:18.496869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.583 [2024-06-08 21:22:18.496873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.583 [2024-06-08 21:22:18.496889] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.496898] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.496905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.496912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.583 [2024-06-08 21:22:18.496919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.583 [2024-06-08 21:22:18.496931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.583 [2024-06-08 21:22:18.497030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.583 [2024-06-08 21:22:18.497037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.583 [2024-06-08 21:22:18.497040] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497044] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=4096, cccid=4 00:26:40.583 [2024-06-08 21:22:18.497048] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577cb0) on tqpair(0x50f9e0): expected_datao=0, payload_size=4096 00:26:40.583 [2024-06-08 21:22:18.497105] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497109] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.583 [2024-06-08 21:22:18.497221] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.583 [2024-06-08 21:22:18.497224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.583 [2024-06-08 21:22:18.497235] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.497242] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 
30000 ms) 00:26:40.583 [2024-06-08 21:22:18.497251] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.497256] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.497261] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.497266] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:40.583 [2024-06-08 21:22:18.497271] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:40.583 [2024-06-08 21:22:18.497276] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:40.583 [2024-06-08 21:22:18.497290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.583 [2024-06-08 21:22:18.497303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.583 [2024-06-08 21:22:18.497310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497314] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x50f9e0) 00:26:40.583 [2024-06-08 21:22:18.497323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.583 [2024-06-08 21:22:18.497336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.583 [2024-06-08 21:22:18.497341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577e10, cid 5, qid 0 00:26:40.583 [2024-06-08 21:22:18.497453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.583 [2024-06-08 21:22:18.497460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.583 [2024-06-08 21:22:18.497463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.583 [2024-06-08 21:22:18.497477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.583 [2024-06-08 21:22:18.497483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.583 [2024-06-08 21:22:18.497486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577e10) on tqpair=0x50f9e0 00:26:40.583 [2024-06-08 21:22:18.497498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.583 [2024-06-08 21:22:18.497502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497505] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.497512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.497522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577e10, cid 5, qid 0 00:26:40.584 [2024-06-08 21:22:18.497654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.497660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.497664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497667] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577e10) on tqpair=0x50f9e0 00:26:40.584 [2024-06-08 21:22:18.497676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.497689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.497699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577e10, cid 5, qid 0 00:26:40.584 [2024-06-08 21:22:18.497802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.497808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.497811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577e10) on tqpair=0x50f9e0 00:26:40.584 [2024-06-08 21:22:18.497823] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497827] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497830] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.497837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.497846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577e10, cid 5, qid 0 00:26:40.584 [2024-06-08 21:22:18.497952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.497958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.497962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497965] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577e10) on tqpair=0x50f9e0 00:26:40.584 [2024-06-08 21:22:18.497977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.497984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.497990] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.497997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.498013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.498020] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498027] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.498033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.498040] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498043] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x50f9e0) 00:26:40.584 [2024-06-08 21:22:18.498053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.584 [2024-06-08 21:22:18.498064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577e10, cid 5, qid 0 00:26:40.584 [2024-06-08 21:22:18.498069] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577cb0, cid 4, qid 0 00:26:40.584 [2024-06-08 21:22:18.498073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577f70, cid 6, qid 0 00:26:40.584 [2024-06-08 21:22:18.498078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5780d0, cid 7, qid 0 00:26:40.584 [2024-06-08 21:22:18.498227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.584 [2024-06-08 21:22:18.498234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.584 [2024-06-08 21:22:18.498237] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498241] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=8192, cccid=5 00:26:40.584 [2024-06-08 21:22:18.498245] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577e10) on tqpair(0x50f9e0): expected_datao=0, payload_size=8192 00:26:40.584 [2024-06-08 21:22:18.498326] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498331] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.584 [2024-06-08 21:22:18.498342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.584 [2024-06-08 21:22:18.498345] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:26:40.584 [2024-06-08 21:22:18.498349] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=512, cccid=4 00:26:40.584 [2024-06-08 21:22:18.498353] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577cb0) on tqpair(0x50f9e0): expected_datao=0, payload_size=512 00:26:40.584 [2024-06-08 21:22:18.498360] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498363] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.584 [2024-06-08 21:22:18.498374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.584 [2024-06-08 21:22:18.498378] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498381] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=512, cccid=6 00:26:40.584 [2024-06-08 21:22:18.498385] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x577f70) on tqpair(0x50f9e0): expected_datao=0, payload_size=512 00:26:40.584 [2024-06-08 21:22:18.498392] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498398] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:40.584 [2024-06-08 21:22:18.498416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:40.584 [2024-06-08 21:22:18.498419] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498422] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x50f9e0): datao=0, datal=4096, cccid=7 00:26:40.584 [2024-06-08 21:22:18.498426] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5780d0) on tqpair(0x50f9e0): expected_datao=0, payload_size=4096 00:26:40.584 [2024-06-08 21:22:18.498498] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.498502] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.544413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.544422] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.544426] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.544429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577e10) on tqpair=0x50f9e0 00:26:40.584 [2024-06-08 21:22:18.544445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.544451] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.544454] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.544457] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577cb0) on tqpair=0x50f9e0 00:26:40.584 [2024-06-08 21:22:18.544466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.544472] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.544475] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.544479] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577f70) on tqpair=0x50f9e0 00:26:40.584 [2024-06-08 21:22:18.544486] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.584 [2024-06-08 21:22:18.544491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.584 [2024-06-08 21:22:18.544495] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.584 [2024-06-08 21:22:18.544498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5780d0) on tqpair=0x50f9e0 00:26:40.584 ===================================================== 00:26:40.584 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.584 ===================================================== 00:26:40.584 Controller Capabilities/Features 00:26:40.584 ================================ 00:26:40.584 Vendor ID: 8086 00:26:40.584 Subsystem Vendor ID: 8086 00:26:40.585 Serial Number: SPDK00000000000001 00:26:40.585 Model Number: SPDK bdev Controller 00:26:40.585 Firmware Version: 24.01.1 00:26:40.585 Recommended Arb Burst: 6 00:26:40.585 IEEE OUI Identifier: e4 d2 5c 00:26:40.585 Multi-path I/O 00:26:40.585 May have multiple subsystem ports: Yes 00:26:40.585 May have multiple controllers: Yes 00:26:40.585 Associated with SR-IOV VF: No 00:26:40.585 Max Data Transfer Size: 131072 00:26:40.585 Max Number of Namespaces: 32 00:26:40.585 Max Number of I/O Queues: 127 00:26:40.585 NVMe Specification Version (VS): 1.3 00:26:40.585 NVMe Specification Version (Identify): 1.3 00:26:40.585 Maximum Queue Entries: 128 00:26:40.585 Contiguous Queues Required: Yes 00:26:40.585 Arbitration Mechanisms Supported 00:26:40.585 Weighted Round Robin: Not Supported 00:26:40.585 Vendor Specific: Not Supported 00:26:40.585 Reset Timeout: 15000 ms 00:26:40.585 Doorbell Stride: 4 bytes 00:26:40.585 NVM Subsystem Reset: Not Supported 00:26:40.585 Command Sets Supported 00:26:40.585 NVM Command Set: Supported 00:26:40.585 Boot Partition: Not Supported 00:26:40.585 Memory Page Size Minimum: 4096 bytes 00:26:40.585 Memory Page Size Maximum: 4096 bytes 00:26:40.585 Persistent Memory Region: Not Supported 00:26:40.585 Optional Asynchronous Events Supported 00:26:40.585 Namespace Attribute Notices: Supported 00:26:40.585 Firmware Activation Notices: Not Supported 00:26:40.585 ANA Change Notices: Not Supported 00:26:40.585 PLE Aggregate Log Change Notices: Not Supported 00:26:40.585 LBA Status Info Alert Notices: Not Supported 00:26:40.585 EGE Aggregate Log Change Notices: Not Supported 00:26:40.585 Normal NVM Subsystem Shutdown event: Not Supported 00:26:40.585 Zone Descriptor Change Notices: Not Supported 00:26:40.585 Discovery Log Change Notices: Not Supported 00:26:40.585 Controller Attributes 00:26:40.585 128-bit Host Identifier: Supported 00:26:40.585 Non-Operational Permissive Mode: Not Supported 00:26:40.585 NVM Sets: Not Supported 00:26:40.585 Read Recovery Levels: Not Supported 00:26:40.585 Endurance Groups: Not Supported 00:26:40.585 Predictable Latency Mode: Not Supported 00:26:40.585 Traffic Based Keep ALive: Not Supported 00:26:40.585 Namespace Granularity: Not Supported 00:26:40.585 SQ Associations: Not Supported 00:26:40.585 UUID List: Not Supported 00:26:40.585 Multi-Domain Subsystem: Not Supported 00:26:40.585 Fixed Capacity Management: Not Supported 00:26:40.585 Variable Capacity Management: Not Supported 00:26:40.585 Delete Endurance Group: Not Supported 00:26:40.585 Delete NVM Set: Not Supported 00:26:40.585 Extended LBA Formats 
Supported: Not Supported 00:26:40.585 Flexible Data Placement Supported: Not Supported 00:26:40.585 00:26:40.585 Controller Memory Buffer Support 00:26:40.585 ================================ 00:26:40.585 Supported: No 00:26:40.585 00:26:40.585 Persistent Memory Region Support 00:26:40.585 ================================ 00:26:40.585 Supported: No 00:26:40.585 00:26:40.585 Admin Command Set Attributes 00:26:40.585 ============================ 00:26:40.585 Security Send/Receive: Not Supported 00:26:40.585 Format NVM: Not Supported 00:26:40.585 Firmware Activate/Download: Not Supported 00:26:40.585 Namespace Management: Not Supported 00:26:40.585 Device Self-Test: Not Supported 00:26:40.585 Directives: Not Supported 00:26:40.585 NVMe-MI: Not Supported 00:26:40.585 Virtualization Management: Not Supported 00:26:40.585 Doorbell Buffer Config: Not Supported 00:26:40.585 Get LBA Status Capability: Not Supported 00:26:40.585 Command & Feature Lockdown Capability: Not Supported 00:26:40.585 Abort Command Limit: 4 00:26:40.585 Async Event Request Limit: 4 00:26:40.585 Number of Firmware Slots: N/A 00:26:40.585 Firmware Slot 1 Read-Only: N/A 00:26:40.585 Firmware Activation Without Reset: N/A 00:26:40.585 Multiple Update Detection Support: N/A 00:26:40.585 Firmware Update Granularity: No Information Provided 00:26:40.585 Per-Namespace SMART Log: No 00:26:40.585 Asymmetric Namespace Access Log Page: Not Supported 00:26:40.585 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:40.585 Command Effects Log Page: Supported 00:26:40.585 Get Log Page Extended Data: Supported 00:26:40.585 Telemetry Log Pages: Not Supported 00:26:40.585 Persistent Event Log Pages: Not Supported 00:26:40.585 Supported Log Pages Log Page: May Support 00:26:40.585 Commands Supported & Effects Log Page: Not Supported 00:26:40.585 Feature Identifiers & Effects Log Page:May Support 00:26:40.585 NVMe-MI Commands & Effects Log Page: May Support 00:26:40.585 Data Area 4 for Telemetry Log: Not Supported 00:26:40.585 Error Log Page Entries Supported: 128 00:26:40.585 Keep Alive: Supported 00:26:40.585 Keep Alive Granularity: 10000 ms 00:26:40.585 00:26:40.585 NVM Command Set Attributes 00:26:40.585 ========================== 00:26:40.585 Submission Queue Entry Size 00:26:40.585 Max: 64 00:26:40.585 Min: 64 00:26:40.585 Completion Queue Entry Size 00:26:40.585 Max: 16 00:26:40.585 Min: 16 00:26:40.585 Number of Namespaces: 32 00:26:40.585 Compare Command: Supported 00:26:40.585 Write Uncorrectable Command: Not Supported 00:26:40.585 Dataset Management Command: Supported 00:26:40.585 Write Zeroes Command: Supported 00:26:40.585 Set Features Save Field: Not Supported 00:26:40.585 Reservations: Supported 00:26:40.585 Timestamp: Not Supported 00:26:40.585 Copy: Supported 00:26:40.585 Volatile Write Cache: Present 00:26:40.585 Atomic Write Unit (Normal): 1 00:26:40.585 Atomic Write Unit (PFail): 1 00:26:40.585 Atomic Compare & Write Unit: 1 00:26:40.585 Fused Compare & Write: Supported 00:26:40.585 Scatter-Gather List 00:26:40.585 SGL Command Set: Supported 00:26:40.585 SGL Keyed: Supported 00:26:40.585 SGL Bit Bucket Descriptor: Not Supported 00:26:40.585 SGL Metadata Pointer: Not Supported 00:26:40.585 Oversized SGL: Not Supported 00:26:40.585 SGL Metadata Address: Not Supported 00:26:40.585 SGL Offset: Supported 00:26:40.585 Transport SGL Data Block: Not Supported 00:26:40.585 Replay Protected Memory Block: Not Supported 00:26:40.585 00:26:40.585 Firmware Slot Information 00:26:40.585 ========================= 00:26:40.585 Active slot: 1 
00:26:40.585 Slot 1 Firmware Revision: 24.01.1 00:26:40.585 00:26:40.585 00:26:40.585 Commands Supported and Effects 00:26:40.585 ============================== 00:26:40.585 Admin Commands 00:26:40.585 -------------- 00:26:40.585 Get Log Page (02h): Supported 00:26:40.585 Identify (06h): Supported 00:26:40.585 Abort (08h): Supported 00:26:40.585 Set Features (09h): Supported 00:26:40.585 Get Features (0Ah): Supported 00:26:40.585 Asynchronous Event Request (0Ch): Supported 00:26:40.585 Keep Alive (18h): Supported 00:26:40.585 I/O Commands 00:26:40.585 ------------ 00:26:40.585 Flush (00h): Supported LBA-Change 00:26:40.585 Write (01h): Supported LBA-Change 00:26:40.585 Read (02h): Supported 00:26:40.585 Compare (05h): Supported 00:26:40.585 Write Zeroes (08h): Supported LBA-Change 00:26:40.585 Dataset Management (09h): Supported LBA-Change 00:26:40.585 Copy (19h): Supported LBA-Change 00:26:40.585 Unknown (79h): Supported LBA-Change 00:26:40.585 Unknown (7Ah): Supported 00:26:40.585 00:26:40.585 Error Log 00:26:40.585 ========= 00:26:40.585 00:26:40.585 Arbitration 00:26:40.585 =========== 00:26:40.585 Arbitration Burst: 1 00:26:40.585 00:26:40.585 Power Management 00:26:40.585 ================ 00:26:40.585 Number of Power States: 1 00:26:40.585 Current Power State: Power State #0 00:26:40.585 Power State #0: 00:26:40.585 Max Power: 0.00 W 00:26:40.585 Non-Operational State: Operational 00:26:40.585 Entry Latency: Not Reported 00:26:40.585 Exit Latency: Not Reported 00:26:40.585 Relative Read Throughput: 0 00:26:40.585 Relative Read Latency: 0 00:26:40.585 Relative Write Throughput: 0 00:26:40.585 Relative Write Latency: 0 00:26:40.585 Idle Power: Not Reported 00:26:40.586 Active Power: Not Reported 00:26:40.586 Non-Operational Permissive Mode: Not Supported 00:26:40.586 00:26:40.586 Health Information 00:26:40.586 ================== 00:26:40.586 Critical Warnings: 00:26:40.586 Available Spare Space: OK 00:26:40.586 Temperature: OK 00:26:40.586 Device Reliability: OK 00:26:40.586 Read Only: No 00:26:40.586 Volatile Memory Backup: OK 00:26:40.586 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:40.586 Temperature Threshold: [2024-06-08 21:22:18.544603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544612] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.544620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.544632] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5780d0, cid 7, qid 0 00:26:40.586 [2024-06-08 21:22:18.544744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.544751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.544754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544758] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5780d0) on tqpair=0x50f9e0 00:26:40.586 [2024-06-08 21:22:18.544790] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:40.586 [2024-06-08 21:22:18.544801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.586 [2024-06-08 21:22:18.544808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.586 [2024-06-08 21:22:18.544813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.586 [2024-06-08 21:22:18.544821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.586 [2024-06-08 21:22:18.544829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.544843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.544855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.586 [2024-06-08 21:22:18.544951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.544957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.544961] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.586 [2024-06-08 21:22:18.544971] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.544978] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.544985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.544998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.586 [2024-06-08 21:22:18.545140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.545146] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.545149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.586 [2024-06-08 21:22:18.545158] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:40.586 [2024-06-08 21:22:18.545162] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:40.586 [2024-06-08 21:22:18.545171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.545185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.545194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.586 [2024-06-08 21:22:18.545291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.545297] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.545300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.586 [2024-06-08 21:22:18.545314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.545327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.545337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.586 [2024-06-08 21:22:18.545449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.545456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.545459] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545463] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.586 [2024-06-08 21:22:18.545472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.545486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.545496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.586 [2024-06-08 21:22:18.545588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.545594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.545597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545601] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.586 [2024-06-08 21:22:18.545610] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545614] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.586 [2024-06-08 21:22:18.545618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.586 [2024-06-08 21:22:18.545624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.586 [2024-06-08 21:22:18.545634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 
00:26:40.586 [2024-06-08 21:22:18.545748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.586 [2024-06-08 21:22:18.545754] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.586 [2024-06-08 21:22:18.545758] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.545761] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.545771] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.545774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.545778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.545785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.545794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.545897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.545903] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.545906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.545910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.545919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.545923] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.545926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.545933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.545943] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.546048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.546056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.546060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546063] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.546073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.546087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.546096] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.546190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.546196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:26:40.587 [2024-06-08 21:22:18.546199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.546212] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546216] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546220] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.546226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.546236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.546350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.546357] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.546360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.546373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.546386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.546396] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.546618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.546624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.546628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.546641] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546648] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.546655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.546665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.546771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.546778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.546784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.546797] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.546810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.546820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.546912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.546918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.546921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.546934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.546942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.546948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.546958] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.547072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.547078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.547082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.547095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.547108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.547118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.547223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.547229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.547232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547236] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.547245] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547249] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.587 [2024-06-08 21:22:18.547259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.587 [2024-06-08 21:22:18.547268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.587 [2024-06-08 21:22:18.547374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.587 [2024-06-08 21:22:18.547380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.587 [2024-06-08 21:22:18.547383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.587 [2024-06-08 21:22:18.547398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.587 [2024-06-08 21:22:18.547407] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547410] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.547417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.547427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.547522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.547528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.547531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.547544] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.547558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.547568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.547677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.547683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.547686] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.547699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 
00:26:40.588 [2024-06-08 21:22:18.547713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.547722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.547827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.547834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.547837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.547850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.547864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.547873] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.547979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.547985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.547988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.547992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.548003] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548007] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548011] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.548017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.548027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.548120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.548126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.548130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.548143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548147] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.548157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 
21:22:18.548166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.548281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.548287] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.548290] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.548303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.548310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.548317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.548326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.552409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.552419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.552422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.552426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.552437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.552440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.552444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x50f9e0) 00:26:40.588 [2024-06-08 21:22:18.552451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.588 [2024-06-08 21:22:18.552462] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x577b50, cid 3, qid 0 00:26:40.588 [2024-06-08 21:22:18.552586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:40.588 [2024-06-08 21:22:18.552592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:40.588 [2024-06-08 21:22:18.552595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:40.588 [2024-06-08 21:22:18.552599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x577b50) on tqpair=0x50f9e0 00:26:40.588 [2024-06-08 21:22:18.552606] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:40.588 0 Kelvin (-273 Celsius) 00:26:40.588 Available Spare: 0% 00:26:40.588 Available Spare Threshold: 0% 00:26:40.588 Life Percentage Used: 0% 00:26:40.588 Data Units Read: 0 00:26:40.588 Data Units Written: 0 00:26:40.588 Host Read Commands: 0 00:26:40.588 Host Write Commands: 0 00:26:40.588 Controller Busy Time: 0 minutes 00:26:40.588 Power Cycles: 0 00:26:40.588 Power On Hours: 0 hours 00:26:40.588 Unsafe Shutdowns: 0 00:26:40.588 Unrecoverable Media Errors: 0 00:26:40.588 Lifetime Error Log Entries: 0 00:26:40.588 Warning Temperature Time: 0 minutes 00:26:40.588 
Critical Temperature Time: 0 minutes 00:26:40.588 00:26:40.588 Number of Queues 00:26:40.588 ================ 00:26:40.588 Number of I/O Submission Queues: 127 00:26:40.588 Number of I/O Completion Queues: 127 00:26:40.588 00:26:40.588 Active Namespaces 00:26:40.588 ================= 00:26:40.588 Namespace ID:1 00:26:40.588 Error Recovery Timeout: Unlimited 00:26:40.588 Command Set Identifier: NVM (00h) 00:26:40.588 Deallocate: Supported 00:26:40.588 Deallocated/Unwritten Error: Not Supported 00:26:40.588 Deallocated Read Value: Unknown 00:26:40.588 Deallocate in Write Zeroes: Not Supported 00:26:40.588 Deallocated Guard Field: 0xFFFF 00:26:40.588 Flush: Supported 00:26:40.588 Reservation: Supported 00:26:40.588 Namespace Sharing Capabilities: Multiple Controllers 00:26:40.588 Size (in LBAs): 131072 (0GiB) 00:26:40.588 Capacity (in LBAs): 131072 (0GiB) 00:26:40.588 Utilization (in LBAs): 131072 (0GiB) 00:26:40.588 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:40.588 EUI64: ABCDEF0123456789 00:26:40.588 UUID: 3788c50e-b34f-4d31-8411-925a08b61788 00:26:40.588 Thin Provisioning: Not Supported 00:26:40.588 Per-NS Atomic Units: Yes 00:26:40.588 Atomic Boundary Size (Normal): 0 00:26:40.588 Atomic Boundary Size (PFail): 0 00:26:40.589 Atomic Boundary Offset: 0 00:26:40.589 Maximum Single Source Range Length: 65535 00:26:40.589 Maximum Copy Length: 65535 00:26:40.589 Maximum Source Range Count: 1 00:26:40.589 NGUID/EUI64 Never Reused: No 00:26:40.589 Namespace Write Protected: No 00:26:40.589 Number of LBA Formats: 1 00:26:40.589 Current LBA Format: LBA Format #00 00:26:40.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:40.589 00:26:40.589 21:22:18 -- host/identify.sh@51 -- # sync 00:26:40.589 21:22:18 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.589 21:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.589 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:26:40.589 21:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.589 21:22:18 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:40.589 21:22:18 -- host/identify.sh@56 -- # nvmftestfini 00:26:40.589 21:22:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:40.589 21:22:18 -- nvmf/common.sh@116 -- # sync 00:26:40.589 21:22:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:40.589 21:22:18 -- nvmf/common.sh@119 -- # set +e 00:26:40.589 21:22:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:40.589 21:22:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:40.589 rmmod nvme_tcp 00:26:40.589 rmmod nvme_fabrics 00:26:40.589 rmmod nvme_keyring 00:26:40.589 21:22:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:40.589 21:22:18 -- nvmf/common.sh@123 -- # set -e 00:26:40.589 21:22:18 -- nvmf/common.sh@124 -- # return 0 00:26:40.589 21:22:18 -- nvmf/common.sh@477 -- # '[' -n 2506901 ']' 00:26:40.589 21:22:18 -- nvmf/common.sh@478 -- # killprocess 2506901 00:26:40.589 21:22:18 -- common/autotest_common.sh@926 -- # '[' -z 2506901 ']' 00:26:40.589 21:22:18 -- common/autotest_common.sh@930 -- # kill -0 2506901 00:26:40.589 21:22:18 -- common/autotest_common.sh@931 -- # uname 00:26:40.589 21:22:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:40.589 21:22:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2506901 00:26:40.850 21:22:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:40.850 21:22:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:40.850 
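The identify test finishes by deleting the exported subsystem over JSON-RPC and unloading the host-side NVMe modules before stopping the target application. A minimal standalone sketch of that cleanup, assuming the SPDK checkout path used by this job and the default RPC socket (an illustration only, not the test's own nvmftestfini/killprocess helpers):

  #!/usr/bin/env bash
  # Sketch: remove the exported subsystem, unload host modules, stop the target app.
  set -e
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Same RPC the test issues for nqn.2016-06.io.spdk:cnode1.
  "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Host-side module unload, mirroring the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop nvmf_tgt; the test kills the PID it recorded earlier (2506901 here),
  # a generic script would have to look it up, e.g. via pgrep -f.
  kill "$(pgrep -f nvmf_tgt)" 2>/dev/null || true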
21:22:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2506901' 00:26:40.850 killing process with pid 2506901 00:26:40.850 21:22:18 -- common/autotest_common.sh@945 -- # kill 2506901 00:26:40.850 [2024-06-08 21:22:18.697352] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:40.850 21:22:18 -- common/autotest_common.sh@950 -- # wait 2506901 00:26:40.850 21:22:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:40.850 21:22:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:40.850 21:22:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:40.850 21:22:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.850 21:22:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:40.850 21:22:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.850 21:22:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.850 21:22:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.397 21:22:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:43.397 00:26:43.397 real 0m10.815s 00:26:43.397 user 0m7.916s 00:26:43.397 sys 0m5.520s 00:26:43.397 21:22:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.397 21:22:20 -- common/autotest_common.sh@10 -- # set +x 00:26:43.397 ************************************ 00:26:43.397 END TEST nvmf_identify 00:26:43.397 ************************************ 00:26:43.397 21:22:20 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:43.397 21:22:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:43.397 21:22:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:43.397 21:22:20 -- common/autotest_common.sh@10 -- # set +x 00:26:43.397 ************************************ 00:26:43.397 START TEST nvmf_perf 00:26:43.397 ************************************ 00:26:43.397 21:22:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:43.397 * Looking for test storage... 
00:26:43.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:43.397 21:22:21 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.397 21:22:21 -- nvmf/common.sh@7 -- # uname -s 00:26:43.397 21:22:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.397 21:22:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.397 21:22:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.397 21:22:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.397 21:22:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.397 21:22:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.397 21:22:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.397 21:22:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.397 21:22:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.397 21:22:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.397 21:22:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:43.397 21:22:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:43.397 21:22:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.397 21:22:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.397 21:22:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.397 21:22:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.397 21:22:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.397 21:22:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.397 21:22:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.397 21:22:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.397 21:22:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.397 21:22:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.397 21:22:21 -- paths/export.sh@5 -- # export PATH 00:26:43.397 21:22:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.397 21:22:21 -- nvmf/common.sh@46 -- # : 0 00:26:43.397 21:22:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:43.397 21:22:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:43.397 21:22:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:43.397 21:22:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.397 21:22:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.397 21:22:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:43.397 21:22:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:43.397 21:22:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:43.397 21:22:21 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:43.397 21:22:21 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:43.397 21:22:21 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:43.397 21:22:21 -- host/perf.sh@17 -- # nvmftestinit 00:26:43.397 21:22:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:43.397 21:22:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.397 21:22:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:43.397 21:22:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:43.397 21:22:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:43.397 21:22:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.397 21:22:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.397 21:22:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.397 21:22:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:43.397 21:22:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:43.397 21:22:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:43.397 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:26:49.985 21:22:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:49.985 21:22:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:49.985 21:22:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:49.985 21:22:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:49.985 21:22:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:49.985 21:22:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:49.985 21:22:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:49.985 21:22:27 -- nvmf/common.sh@294 -- # net_devs=() 
00:26:49.985 21:22:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:49.985 21:22:27 -- nvmf/common.sh@295 -- # e810=() 00:26:49.985 21:22:27 -- nvmf/common.sh@295 -- # local -ga e810 00:26:49.985 21:22:27 -- nvmf/common.sh@296 -- # x722=() 00:26:49.985 21:22:27 -- nvmf/common.sh@296 -- # local -ga x722 00:26:49.985 21:22:27 -- nvmf/common.sh@297 -- # mlx=() 00:26:49.985 21:22:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:49.985 21:22:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.985 21:22:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:49.985 21:22:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:49.985 21:22:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:49.985 21:22:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:49.985 21:22:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:49.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:49.985 21:22:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:49.985 21:22:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:49.985 21:22:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:49.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:49.985 21:22:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:49.986 21:22:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:49.986 21:22:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.986 21:22:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:49.986 21:22:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:49.986 21:22:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:49.986 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:49.986 21:22:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.986 21:22:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:49.986 21:22:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.986 21:22:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:49.986 21:22:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.986 21:22:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:49.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:49.986 21:22:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.986 21:22:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:49.986 21:22:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:49.986 21:22:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:49.986 21:22:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:49.986 21:22:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.986 21:22:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.986 21:22:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.986 21:22:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:49.986 21:22:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.986 21:22:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.986 21:22:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:49.986 21:22:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.986 21:22:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.986 21:22:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:49.986 21:22:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:49.986 21:22:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.986 21:22:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.986 21:22:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.986 21:22:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.246 21:22:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:50.246 21:22:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.246 21:22:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.246 21:22:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.246 21:22:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:50.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:26:50.246 00:26:50.246 --- 10.0.0.2 ping statistics --- 00:26:50.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.246 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:26:50.246 21:22:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:26:50.246 00:26:50.246 --- 10.0.0.1 ping statistics --- 00:26:50.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.246 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:26:50.246 21:22:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.246 21:22:28 -- nvmf/common.sh@410 -- # return 0 00:26:50.246 21:22:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:50.246 21:22:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.246 21:22:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:50.246 21:22:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:50.246 21:22:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.246 21:22:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:50.246 21:22:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:50.246 21:22:28 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:50.246 21:22:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:50.247 21:22:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:50.247 21:22:28 -- common/autotest_common.sh@10 -- # set +x 00:26:50.247 21:22:28 -- nvmf/common.sh@469 -- # nvmfpid=2511415 00:26:50.247 21:22:28 -- nvmf/common.sh@470 -- # waitforlisten 2511415 00:26:50.247 21:22:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.247 21:22:28 -- common/autotest_common.sh@819 -- # '[' -z 2511415 ']' 00:26:50.247 21:22:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.247 21:22:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:50.247 21:22:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.247 21:22:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:50.247 21:22:28 -- common/autotest_common.sh@10 -- # set +x 00:26:50.247 [2024-06-08 21:22:28.331701] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:50.247 [2024-06-08 21:22:28.331765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.507 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.507 [2024-06-08 21:22:28.401184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.507 [2024-06-08 21:22:28.474226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:50.507 [2024-06-08 21:22:28.474362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.507 [2024-06-08 21:22:28.474372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.507 [2024-06-08 21:22:28.474380] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
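Editor's note: the netns plumbing traced above (nvmf_tcp_init) is what turns the two E810 ports into a self-contained target/initiator pair: one port is moved into a private namespace for the target, the other stays in the root namespace as the initiator, and one ping in each direction confirms the 10.0.0.0/24 link before nvmf_tgt is started. Condensed into a standalone sketch (interface and namespace names taken from this run):

    TGT_IF=cvl_0_0            # target-side port, moved into the namespace
    INI_IF=cvl_0_1            # initiator-side port, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator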
00:26:50.507 [2024-06-08 21:22:28.474523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.507 [2024-06-08 21:22:28.474737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.507 [2024-06-08 21:22:28.474894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.507 [2024-06-08 21:22:28.474894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.077 21:22:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.077 21:22:29 -- common/autotest_common.sh@852 -- # return 0 00:26:51.077 21:22:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:51.077 21:22:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:51.077 21:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:51.077 21:22:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.077 21:22:29 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:51.077 21:22:29 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:51.646 21:22:29 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:51.646 21:22:29 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:51.906 21:22:29 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:51.906 21:22:29 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:51.906 21:22:29 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:51.906 21:22:29 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:51.906 21:22:29 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:51.906 21:22:29 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:51.906 21:22:29 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:52.167 [2024-06-08 21:22:30.097646] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.167 21:22:30 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.427 21:22:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:52.427 21:22:30 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.428 21:22:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:52.428 21:22:30 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:52.687 21:22:30 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.687 [2024-06-08 21:22:30.756256] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.947 21:22:30 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:52.947 21:22:30 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:52.947 21:22:30 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:52.947 21:22:30 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
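Editor's note: everything the perf test needs on the target side is created over JSON-RPC; stripped of the absolute workspace paths, the subsystem setup traced above amounts to the following (bdev and subsystem names as in the trace):

    rpc=scripts/rpc.py    # the trace uses the full /var/jenkins/... path

    $rpc bdev_malloc_create 64 512                    # returns Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The local NVMe bdev (Nvme0n1) comes from gen_nvme.sh piped into load_subsystem_config rather than an explicit attach call, and is then exported as a second namespace alongside the Malloc bdev.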
00:26:52.947 21:22:30 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:54.348 Initializing NVMe Controllers 00:26:54.348 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:54.348 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:54.348 Initialization complete. Launching workers. 00:26:54.348 ======================================================== 00:26:54.348 Latency(us) 00:26:54.348 Device Information : IOPS MiB/s Average min max 00:26:54.348 PCIE (0000:65:00.0) NSID 1 from core 0: 80888.68 315.97 394.81 13.27 4981.24 00:26:54.348 ======================================================== 00:26:54.348 Total : 80888.68 315.97 394.81 13.27 4981.24 00:26:54.348 00:26:54.348 21:22:32 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.348 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.755 Initializing NVMe Controllers 00:26:55.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:55.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:55.755 Initialization complete. Launching workers. 00:26:55.755 ======================================================== 00:26:55.755 Latency(us) 00:26:55.755 Device Information : IOPS MiB/s Average min max 00:26:55.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.88 0.29 13996.52 260.99 45146.57 00:26:55.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.89 0.26 15187.60 6981.80 47905.06 00:26:55.755 ======================================================== 00:26:55.755 Total : 140.77 0.55 14562.49 260.99 47905.06 00:26:55.755 00:26:55.755 21:22:33 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.755 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.141 Initializing NVMe Controllers 00:26:57.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:57.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:57.141 Initialization complete. Launching workers. 
00:26:57.141 ======================================================== 00:26:57.141 Latency(us) 00:26:57.141 Device Information : IOPS MiB/s Average min max 00:26:57.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8448.99 33.00 3793.39 658.28 8187.55 00:26:57.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3959.00 15.46 8119.11 6410.39 15806.21 00:26:57.141 ======================================================== 00:26:57.141 Total : 12407.99 48.47 5173.59 658.28 15806.21 00:26:57.141 00:26:57.141 21:22:35 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:57.141 21:22:35 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:57.142 21:22:35 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:57.142 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.688 Initializing NVMe Controllers 00:26:59.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.688 Controller IO queue size 128, less than required. 00:26:59.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.688 Controller IO queue size 128, less than required. 00:26:59.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:59.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:59.688 Initialization complete. Launching workers. 00:26:59.688 ======================================================== 00:26:59.688 Latency(us) 00:26:59.688 Device Information : IOPS MiB/s Average min max 00:26:59.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 874.86 218.72 150884.59 75769.77 227294.75 00:26:59.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 541.61 135.40 241145.55 70367.32 365685.34 00:26:59.688 ======================================================== 00:26:59.688 Total : 1416.47 354.12 185397.07 70367.32 365685.34 00:26:59.688 00:26:59.688 21:22:37 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:59.688 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.948 No valid NVMe controllers or AIO or URING devices found 00:26:59.948 Initializing NVMe Controllers 00:26:59.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.948 Controller IO queue size 128, less than required. 00:26:59.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.948 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:59.948 Controller IO queue size 128, less than required. 00:26:59.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.948 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:59.948 WARNING: Some requested NVMe devices were skipped 00:26:59.948 21:22:37 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:59.948 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.491 Initializing NVMe Controllers 00:27:02.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.491 Controller IO queue size 128, less than required. 00:27:02.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.491 Controller IO queue size 128, less than required. 00:27:02.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:02.491 Initialization complete. Launching workers. 00:27:02.491 00:27:02.491 ==================== 00:27:02.491 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:02.491 TCP transport: 00:27:02.491 polls: 41047 00:27:02.491 idle_polls: 13407 00:27:02.491 sock_completions: 27640 00:27:02.491 nvme_completions: 3277 00:27:02.491 submitted_requests: 4992 00:27:02.491 queued_requests: 1 00:27:02.491 00:27:02.491 ==================== 00:27:02.491 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:02.491 TCP transport: 00:27:02.491 polls: 41302 00:27:02.491 idle_polls: 14208 00:27:02.491 sock_completions: 27094 00:27:02.492 nvme_completions: 3476 00:27:02.492 submitted_requests: 5413 00:27:02.492 queued_requests: 1 00:27:02.492 ======================================================== 00:27:02.492 Latency(us) 00:27:02.492 Device Information : IOPS MiB/s Average min max 00:27:02.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 882.33 220.58 149341.45 81976.32 238813.96 00:27:02.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 932.32 233.08 139931.08 77521.00 216359.17 00:27:02.492 ======================================================== 00:27:02.492 Total : 1814.65 453.66 144506.64 77521.00 238813.96 00:27:02.492 00:27:02.492 21:22:40 -- host/perf.sh@66 -- # sync 00:27:02.492 21:22:40 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.492 21:22:40 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:02.492 21:22:40 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:27:02.492 21:22:40 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:03.877 21:22:41 -- host/perf.sh@72 -- # ls_guid=3341ae68-1244-4f48-85c1-ef8ba4a65829 00:27:03.877 21:22:41 -- host/perf.sh@73 -- # get_lvs_free_mb 3341ae68-1244-4f48-85c1-ef8ba4a65829 00:27:03.877 21:22:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3341ae68-1244-4f48-85c1-ef8ba4a65829 00:27:03.877 21:22:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:03.877 21:22:41 -- common/autotest_common.sh@1345 -- # local fc 00:27:03.877 21:22:41 -- common/autotest_common.sh@1346 -- # local cs 00:27:03.878 21:22:41 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.878 21:22:41 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:03.878 { 00:27:03.878 "uuid": "3341ae68-1244-4f48-85c1-ef8ba4a65829", 00:27:03.878 "name": "lvs_0", 00:27:03.878 "base_bdev": "Nvme0n1", 00:27:03.878 "total_data_clusters": 457407, 00:27:03.878 "free_clusters": 457407, 00:27:03.878 "block_size": 512, 00:27:03.878 "cluster_size": 4194304 00:27:03.878 } 00:27:03.878 ]' 00:27:03.878 21:22:41 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3341ae68-1244-4f48-85c1-ef8ba4a65829") .free_clusters' 00:27:03.878 21:22:41 -- common/autotest_common.sh@1348 -- # fc=457407 00:27:03.878 21:22:41 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3341ae68-1244-4f48-85c1-ef8ba4a65829") .cluster_size' 00:27:03.878 21:22:41 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:03.878 21:22:41 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:27:03.878 21:22:41 -- common/autotest_common.sh@1353 -- # echo 1829628 00:27:03.878 1829628 00:27:03.878 21:22:41 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:27:03.878 21:22:41 -- host/perf.sh@78 -- # free_mb=20480 00:27:03.878 21:22:41 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3341ae68-1244-4f48-85c1-ef8ba4a65829 lbd_0 20480 00:27:04.139 21:22:41 -- host/perf.sh@80 -- # lb_guid=823e9053-9ea8-4ee1-ba3f-d518550310ba 00:27:04.139 21:22:41 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 823e9053-9ea8-4ee1-ba3f-d518550310ba lvs_n_0 00:27:06.049 21:22:43 -- host/perf.sh@83 -- # ls_nested_guid=ac0a66e2-82f8-4402-a42f-c18eb5e1b001 00:27:06.049 21:22:43 -- host/perf.sh@84 -- # get_lvs_free_mb ac0a66e2-82f8-4402-a42f-c18eb5e1b001 00:27:06.049 21:22:43 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ac0a66e2-82f8-4402-a42f-c18eb5e1b001 00:27:06.049 21:22:43 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:06.049 21:22:43 -- common/autotest_common.sh@1345 -- # local fc 00:27:06.049 21:22:43 -- common/autotest_common.sh@1346 -- # local cs 00:27:06.049 21:22:43 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:06.049 21:22:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:06.049 { 00:27:06.049 "uuid": "3341ae68-1244-4f48-85c1-ef8ba4a65829", 00:27:06.049 "name": "lvs_0", 00:27:06.049 "base_bdev": "Nvme0n1", 00:27:06.049 "total_data_clusters": 457407, 00:27:06.049 "free_clusters": 452287, 00:27:06.049 "block_size": 512, 00:27:06.049 "cluster_size": 4194304 00:27:06.049 }, 00:27:06.049 { 00:27:06.049 "uuid": "ac0a66e2-82f8-4402-a42f-c18eb5e1b001", 00:27:06.049 "name": "lvs_n_0", 00:27:06.049 "base_bdev": "823e9053-9ea8-4ee1-ba3f-d518550310ba", 00:27:06.049 "total_data_clusters": 5114, 00:27:06.049 "free_clusters": 5114, 00:27:06.049 "block_size": 512, 00:27:06.049 "cluster_size": 4194304 00:27:06.049 } 00:27:06.049 ]' 00:27:06.050 21:22:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ac0a66e2-82f8-4402-a42f-c18eb5e1b001") .free_clusters' 00:27:06.050 21:22:43 -- common/autotest_common.sh@1348 -- # fc=5114 00:27:06.050 21:22:43 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ac0a66e2-82f8-4402-a42f-c18eb5e1b001") .cluster_size' 00:27:06.050 21:22:43 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:06.050 21:22:43 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:27:06.050 21:22:43 -- common/autotest_common.sh@1353 -- # echo 20456 00:27:06.050 20456 00:27:06.050 21:22:43 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:06.050 21:22:43 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac0a66e2-82f8-4402-a42f-c18eb5e1b001 lbd_nest_0 20456 00:27:06.050 21:22:44 -- host/perf.sh@88 -- # lb_nested_guid=b1aefb0f-2e8e-4c6a-ae02-e2a0395d976e 00:27:06.050 21:22:44 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.310 21:22:44 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:06.310 21:22:44 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b1aefb0f-2e8e-4c6a-ae02-e2a0395d976e 00:27:06.310 21:22:44 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.571 21:22:44 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:06.571 21:22:44 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:06.571 21:22:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:06.571 21:22:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:06.571 21:22:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.571 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.806 Initializing NVMe Controllers 00:27:18.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:18.806 Initialization complete. Launching workers. 00:27:18.806 ======================================================== 00:27:18.806 Latency(us) 00:27:18.806 Device Information : IOPS MiB/s Average min max 00:27:18.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.49 0.02 21124.67 269.03 45922.10 00:27:18.806 ======================================================== 00:27:18.806 Total : 47.49 0.02 21124.67 269.03 45922.10 00:27:18.806 00:27:18.806 21:22:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:18.806 21:22:54 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:18.806 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.860 Initializing NVMe Controllers 00:27:28.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:28.860 Initialization complete. Launching workers. 
00:27:28.860 ======================================================== 00:27:28.860 Latency(us) 00:27:28.860 Device Information : IOPS MiB/s Average min max 00:27:28.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.60 9.95 12573.14 7014.89 47887.53 00:27:28.860 ======================================================== 00:27:28.860 Total : 79.60 9.95 12573.14 7014.89 47887.53 00:27:28.860 00:27:28.860 21:23:05 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:28.860 21:23:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:28.860 21:23:05 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:28.860 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.866 Initializing NVMe Controllers 00:27:38.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.866 Initialization complete. Launching workers. 00:27:38.866 ======================================================== 00:27:38.866 Latency(us) 00:27:38.866 Device Information : IOPS MiB/s Average min max 00:27:38.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8948.98 4.37 3576.08 346.82 10356.12 00:27:38.866 ======================================================== 00:27:38.866 Total : 8948.98 4.37 3576.08 346.82 10356.12 00:27:38.866 00:27:38.866 21:23:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:38.866 21:23:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:38.866 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.006 Initializing NVMe Controllers 00:27:49.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:49.006 Initialization complete. Launching workers. 00:27:49.006 ======================================================== 00:27:49.007 Latency(us) 00:27:49.007 Device Information : IOPS MiB/s Average min max 00:27:49.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1701.07 212.63 18833.95 900.34 41710.38 00:27:49.007 ======================================================== 00:27:49.007 Total : 1701.07 212.63 18833.95 900.34 41710.38 00:27:49.007 00:27:49.007 21:23:25 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:49.007 21:23:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:49.007 21:23:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:49.007 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.010 Initializing NVMe Controllers 00:27:59.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.010 Controller IO queue size 128, less than required. 00:27:59.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.010 Initialization complete. Launching workers. 
00:27:59.010 ======================================================== 00:27:59.010 Latency(us) 00:27:59.010 Device Information : IOPS MiB/s Average min max 00:27:59.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15889.90 7.76 8059.79 3196.55 19026.93 00:27:59.010 ======================================================== 00:27:59.010 Total : 15889.90 7.76 8059.79 3196.55 19026.93 00:27:59.010 00:27:59.010 21:23:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:59.011 21:23:36 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.011 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.010 Initializing NVMe Controllers 00:28:09.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.010 Controller IO queue size 128, less than required. 00:28:09.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.010 Initialization complete. Launching workers. 00:28:09.010 ======================================================== 00:28:09.010 Latency(us) 00:28:09.010 Device Information : IOPS MiB/s Average min max 00:28:09.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1148.53 143.57 112119.94 22744.83 235831.91 00:28:09.010 ======================================================== 00:28:09.010 Total : 1148.53 143.57 112119.94 22744.83 235831.91 00:28:09.010 00:28:09.010 21:23:46 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.010 21:23:46 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b1aefb0f-2e8e-4c6a-ae02-e2a0395d976e 00:28:10.394 21:23:48 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:10.655 21:23:48 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 823e9053-9ea8-4ee1-ba3f-d518550310ba 00:28:10.655 21:23:48 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:10.916 21:23:48 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:10.916 21:23:48 -- host/perf.sh@114 -- # nvmftestfini 00:28:10.916 21:23:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:10.916 21:23:48 -- nvmf/common.sh@116 -- # sync 00:28:10.916 21:23:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:10.916 21:23:48 -- nvmf/common.sh@119 -- # set +e 00:28:10.916 21:23:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:10.916 21:23:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:10.916 rmmod nvme_tcp 00:28:10.916 rmmod nvme_fabrics 00:28:10.916 rmmod nvme_keyring 00:28:10.916 21:23:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:10.916 21:23:48 -- nvmf/common.sh@123 -- # set -e 00:28:10.916 21:23:48 -- nvmf/common.sh@124 -- # return 0 00:28:10.916 21:23:48 -- nvmf/common.sh@477 -- # '[' -n 2511415 ']' 00:28:10.916 21:23:48 -- nvmf/common.sh@478 -- # killprocess 2511415 00:28:10.916 21:23:48 -- common/autotest_common.sh@926 -- # '[' -z 2511415 ']' 00:28:10.916 21:23:48 -- common/autotest_common.sh@930 -- # kill 
-0 2511415 00:28:10.916 21:23:48 -- common/autotest_common.sh@931 -- # uname 00:28:10.916 21:23:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:10.916 21:23:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2511415 00:28:10.916 21:23:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:10.916 21:23:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:10.916 21:23:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2511415' 00:28:10.916 killing process with pid 2511415 00:28:10.916 21:23:48 -- common/autotest_common.sh@945 -- # kill 2511415 00:28:10.916 21:23:48 -- common/autotest_common.sh@950 -- # wait 2511415 00:28:13.462 21:23:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:13.462 21:23:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:13.462 21:23:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:13.462 21:23:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.462 21:23:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:13.462 21:23:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.462 21:23:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.462 21:23:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.373 21:23:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:15.373 00:28:15.373 real 1m32.034s 00:28:15.373 user 5m25.080s 00:28:15.373 sys 0m13.876s 00:28:15.373 21:23:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.373 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:28:15.373 ************************************ 00:28:15.373 END TEST nvmf_perf 00:28:15.373 ************************************ 00:28:15.373 21:23:53 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:15.373 21:23:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:15.373 21:23:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:15.373 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:28:15.373 ************************************ 00:28:15.373 START TEST nvmf_fio_host 00:28:15.373 ************************************ 00:28:15.373 21:23:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:15.373 * Looking for test storage... 
00:28:15.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.373 21:23:53 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.373 21:23:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.373 21:23:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.373 21:23:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.373 21:23:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.373 21:23:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.373 21:23:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.373 21:23:53 -- paths/export.sh@5 -- # export PATH 00:28:15.374 21:23:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.374 21:23:53 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.374 21:23:53 -- nvmf/common.sh@7 -- # uname -s 00:28:15.374 21:23:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.374 21:23:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.374 21:23:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.374 21:23:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.374 21:23:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.374 21:23:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.374 21:23:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.374 21:23:53 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.374 21:23:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.374 21:23:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.374 21:23:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:15.374 21:23:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:15.374 21:23:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.374 21:23:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.374 21:23:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.374 21:23:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.374 21:23:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.374 21:23:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.374 21:23:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.374 21:23:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.374 21:23:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.374 21:23:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.374 21:23:53 -- paths/export.sh@5 -- # export PATH 00:28:15.374 21:23:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.374 21:23:53 -- nvmf/common.sh@46 -- # : 0 00:28:15.374 21:23:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:15.374 21:23:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:15.374 21:23:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:15.374 21:23:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.374 21:23:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.374 21:23:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:15.374 21:23:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:15.374 21:23:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:15.374 21:23:53 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:15.374 21:23:53 -- host/fio.sh@14 -- # nvmftestinit 00:28:15.374 21:23:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:15.374 21:23:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.374 21:23:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:15.374 21:23:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:15.374 21:23:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:15.374 21:23:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.374 21:23:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.374 21:23:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.374 21:23:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:15.374 21:23:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:15.374 21:23:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:15.374 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:28:21.963 21:23:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:21.963 21:23:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:21.963 21:23:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:21.963 21:23:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:21.964 21:23:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:21.964 21:23:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:21.964 21:23:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:21.964 21:23:59 -- nvmf/common.sh@294 -- # net_devs=() 00:28:21.964 21:23:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:21.964 21:23:59 -- nvmf/common.sh@295 -- # e810=() 00:28:21.964 21:23:59 -- nvmf/common.sh@295 -- # local -ga e810 00:28:21.964 21:23:59 -- nvmf/common.sh@296 -- # x722=() 00:28:21.964 21:23:59 -- nvmf/common.sh@296 -- # local -ga x722 00:28:21.964 21:23:59 -- nvmf/common.sh@297 -- # mlx=() 00:28:21.964 21:23:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:21.964 21:23:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.964 21:23:59 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.964 21:23:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:21.964 21:23:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:21.964 21:23:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:21.964 21:23:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:21.964 21:23:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:21.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:21.964 21:23:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:21.964 21:23:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:21.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:21.964 21:23:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:21.964 21:23:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:21.964 21:23:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.964 21:23:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:21.964 21:23:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.964 21:23:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:21.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:21.964 21:23:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.964 21:23:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:21.964 21:23:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.964 21:23:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:21.964 21:23:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.964 21:23:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:21.964 Found net devices under 0000:4b:00.1: cvl_0_1 
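Editor's note: device discovery in nvmf/common.sh works purely from PCI IDs: the E810 functions (0x8086:0x159b here) are collected into pci_devs, and each one is mapped to its kernel interface name through sysfs. Reduced to the essentials (the device list is hard-coded below; the script builds it from its PCI bus cache):

    pci_devs=(0000:4b:00.0 0000:4b:00.1)   # the two ice ports found in this run
    net_devs=()

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done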
00:28:21.964 21:23:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.964 21:23:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:21.964 21:23:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:21.964 21:23:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:21.964 21:23:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:21.964 21:23:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.964 21:23:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.964 21:23:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.964 21:23:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:21.964 21:23:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.964 21:23:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.964 21:23:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:21.964 21:23:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.964 21:23:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.964 21:23:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:21.964 21:23:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:21.964 21:23:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.964 21:23:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.964 21:23:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.964 21:23:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.964 21:24:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:21.964 21:24:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.226 21:24:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.226 21:24:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.226 21:24:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:22.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:28:22.226 00:28:22.226 --- 10.0.0.2 ping statistics --- 00:28:22.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.226 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:28:22.226 21:24:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:28:22.226 00:28:22.226 --- 10.0.0.1 ping statistics --- 00:28:22.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.226 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:22.226 21:24:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.226 21:24:00 -- nvmf/common.sh@410 -- # return 0 00:28:22.226 21:24:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:22.226 21:24:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.226 21:24:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:22.226 21:24:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:22.226 21:24:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.226 21:24:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:22.226 21:24:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:22.226 21:24:00 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:22.226 21:24:00 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:22.226 21:24:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:22.226 21:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:22.226 21:24:00 -- host/fio.sh@24 -- # nvmfpid=2531463 00:28:22.226 21:24:00 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.226 21:24:00 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:22.226 21:24:00 -- host/fio.sh@28 -- # waitforlisten 2531463 00:28:22.226 21:24:00 -- common/autotest_common.sh@819 -- # '[' -z 2531463 ']' 00:28:22.226 21:24:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.226 21:24:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:22.226 21:24:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.226 21:24:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:22.226 21:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:22.226 [2024-06-08 21:24:00.274674] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:22.226 [2024-06-08 21:24:00.274739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.226 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.487 [2024-06-08 21:24:00.345626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.487 [2024-06-08 21:24:00.419871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:22.487 [2024-06-08 21:24:00.420003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.487 [2024-06-08 21:24:00.420013] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.487 [2024-06-08 21:24:00.420021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
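Editor's note: as with the perf test, fio.sh starts the target inside the namespace and then waits until the RPC socket answers before issuing any configuration. A simplified stand-in for that start-and-wait step (waitforlisten itself lives in autotest_common.sh; the poll loop below only approximates it):

    NS_EXEC="ip netns exec cvl_0_0_ns_spdk"

    $NS_EXEC build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the target responds (or dies).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done

The unix-domain RPC socket lives on the filesystem, not in the network namespace, which is why rpc.py can be run from the root namespace while the target itself is confined to cvl_0_0_ns_spdk.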
00:28:22.487 [2024-06-08 21:24:00.420195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.487 [2024-06-08 21:24:00.420313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.488 [2024-06-08 21:24:00.420467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.488 [2024-06-08 21:24:00.420468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.060 21:24:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:23.060 21:24:01 -- common/autotest_common.sh@852 -- # return 0 00:28:23.060 21:24:01 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:23.321 [2024-06-08 21:24:01.174854] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.321 21:24:01 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:23.321 21:24:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:23.321 21:24:01 -- common/autotest_common.sh@10 -- # set +x 00:28:23.321 21:24:01 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:23.321 Malloc1 00:28:23.582 21:24:01 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:23.582 21:24:01 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:23.843 21:24:01 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.843 [2024-06-08 21:24:01.880455] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.843 21:24:01 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.105 21:24:02 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:24.105 21:24:02 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:24.105 21:24:02 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:24.105 21:24:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:24.105 21:24:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.105 21:24:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:24.105 21:24:02 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.105 21:24:02 -- common/autotest_common.sh@1320 -- # shift 00:28:24.105 21:24:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:24.105 21:24:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:24.105 21:24:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:24.105 21:24:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:24.105 21:24:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:24.105 21:24:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:24.105 21:24:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:24.105 21:24:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:24.365 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:24.365 fio-3.35 00:28:24.365 Starting 1 thread 00:28:24.627 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.169 00:28:27.169 test: (groupid=0, jobs=1): err= 0: pid=2532278: Sat Jun 8 21:24:04 2024 00:28:27.169 read: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(110MiB/2004msec) 00:28:27.169 slat (usec): min=2, max=275, avg= 2.17, stdev= 2.31 00:28:27.169 clat (usec): min=3039, max=9982, avg=5197.24, stdev=926.73 00:28:27.169 lat (usec): min=3041, max=9984, avg=5199.41, stdev=926.81 00:28:27.169 clat percentiles (usec): 00:28:27.169 | 1.00th=[ 3687], 5.00th=[ 4113], 10.00th=[ 4293], 20.00th=[ 4490], 00:28:27.169 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5145], 00:28:27.170 | 70.00th=[ 5342], 80.00th=[ 5735], 90.00th=[ 6587], 95.00th=[ 7177], 00:28:27.170 | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9503], 00:28:27.170 | 99.99th=[ 9896] 00:28:27.170 bw ( KiB/s): min=47672, max=59032, per=99.91%, avg=55916.00, stdev=5503.28, samples=4 00:28:27.170 iops : min=11918, max=14758, avg=13979.00, stdev=1375.82, samples=4 00:28:27.170 write: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(110MiB/2004msec); 0 zone resets 00:28:27.170 slat (usec): min=2, max=208, avg= 2.27, stdev= 1.47 00:28:27.170 clat (usec): min=1947, max=7546, avg=3895.52, stdev=702.35 00:28:27.170 lat (usec): min=1949, max=7548, avg=3897.79, stdev=702.46 00:28:27.170 clat percentiles (usec): 00:28:27.170 | 1.00th=[ 2540], 5.00th=[ 2868], 10.00th=[ 3130], 20.00th=[ 3392], 00:28:27.170 | 30.00th=[ 3589], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 3916], 00:28:27.170 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5473], 00:28:27.170 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 6980], 00:28:27.170 | 99.99th=[ 7177] 00:28:27.170 bw ( KiB/s): min=48304, max=59056, per=100.00%, avg=56008.00, stdev=5147.31, samples=4 00:28:27.170 iops : min=12076, max=14764, avg=14002.00, stdev=1286.83, samples=4 00:28:27.170 lat (msec) : 2=0.01%, 4=34.51%, 10=65.48% 00:28:27.170 cpu : usr=66.35%, sys=26.91%, ctx=28, majf=0, minf=6 00:28:27.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:27.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.170 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:27.170 issued rwts: total=28038,28051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:27.170 00:28:27.170 Run status group 0 (all jobs): 00:28:27.170 READ: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=110MiB (115MB), run=2004-2004msec 00:28:27.170 WRITE: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=110MiB (115MB), run=2004-2004msec 00:28:27.170 21:24:04 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:27.170 21:24:04 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:27.170 21:24:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:27.170 21:24:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:27.170 21:24:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:27.170 21:24:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.170 21:24:04 -- common/autotest_common.sh@1320 -- # shift 00:28:27.170 21:24:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:27.170 21:24:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:27.170 21:24:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:27.170 21:24:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:27.170 21:24:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:27.170 21:24:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:27.170 21:24:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:27.170 21:24:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:27.170 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:27.170 fio-3.35 00:28:27.170 Starting 1 thread 00:28:27.430 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.977 00:28:29.977 test: (groupid=0, jobs=1): err= 0: pid=2532927: Sat Jun 8 21:24:07 2024 00:28:29.977 read: IOPS=8695, BW=136MiB/s (142MB/s)(273MiB/2006msec) 00:28:29.977 slat (usec): min=3, max=111, avg= 3.68, stdev= 1.92 00:28:29.977 clat (usec): min=1219, max=27768, avg=9175.41, stdev=2571.92 
00:28:29.977 lat (usec): min=1223, max=27772, avg=9179.08, stdev=2572.33 00:28:29.977 clat percentiles (usec): 00:28:29.977 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6915], 00:28:29.977 | 30.00th=[ 7570], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:28:29.977 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12911], 95.00th=[14353], 00:28:29.977 | 99.00th=[16319], 99.50th=[16581], 99.90th=[17171], 99.95th=[17433], 00:28:29.977 | 99.99th=[27395] 00:28:29.977 bw ( KiB/s): min=60448, max=75680, per=49.93%, avg=69472.00, stdev=6711.97, samples=4 00:28:29.977 iops : min= 3778, max= 4730, avg=4342.00, stdev=419.50, samples=4 00:28:29.977 write: IOPS=5023, BW=78.5MiB/s (82.3MB/s)(141MiB/1794msec); 0 zone resets 00:28:29.977 slat (usec): min=39, max=445, avg=41.24, stdev= 9.32 00:28:29.977 clat (usec): min=2660, max=22322, avg=9815.32, stdev=1999.42 00:28:29.977 lat (usec): min=2700, max=22366, avg=9856.56, stdev=2002.68 00:28:29.978 clat percentiles (usec): 00:28:29.978 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8225], 00:28:29.978 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10028], 00:28:29.978 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12125], 95.00th=[13173], 00:28:29.978 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:28:29.978 | 99.99th=[22414] 00:28:29.978 bw ( KiB/s): min=63136, max=78272, per=89.70%, avg=72096.00, stdev=6855.12, samples=4 00:28:29.978 iops : min= 3946, max= 4892, avg=4506.00, stdev=428.45, samples=4 00:28:29.978 lat (msec) : 2=0.01%, 4=0.25%, 10=65.37%, 20=34.36%, 50=0.02% 00:28:29.978 cpu : usr=81.05%, sys=14.11%, ctx=13, majf=0, minf=11 00:28:29.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:29.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:29.978 issued rwts: total=17444,9012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:29.978 00:28:29.978 Run status group 0 (all jobs): 00:28:29.978 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=273MiB (286MB), run=2006-2006msec 00:28:29.978 WRITE: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=141MiB (148MB), run=1794-1794msec 00:28:29.978 21:24:07 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.978 21:24:07 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:29.978 21:24:07 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:29.978 21:24:07 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:29.978 21:24:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:29.978 21:24:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:29.978 21:24:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:29.978 21:24:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:29.978 21:24:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:29.978 21:24:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:29.978 21:24:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:28:29.978 21:24:07 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 
-i 10.0.0.2 00:28:30.550 Nvme0n1 00:28:30.550 21:24:08 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:31.123 21:24:08 -- host/fio.sh@53 -- # ls_guid=419eaa62-029e-44ff-a0b1-597ad5b83734 00:28:31.123 21:24:08 -- host/fio.sh@54 -- # get_lvs_free_mb 419eaa62-029e-44ff-a0b1-597ad5b83734 00:28:31.123 21:24:08 -- common/autotest_common.sh@1343 -- # local lvs_uuid=419eaa62-029e-44ff-a0b1-597ad5b83734 00:28:31.123 21:24:08 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:31.123 21:24:08 -- common/autotest_common.sh@1345 -- # local fc 00:28:31.123 21:24:08 -- common/autotest_common.sh@1346 -- # local cs 00:28:31.123 21:24:08 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:31.123 21:24:09 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:31.123 { 00:28:31.123 "uuid": "419eaa62-029e-44ff-a0b1-597ad5b83734", 00:28:31.123 "name": "lvs_0", 00:28:31.123 "base_bdev": "Nvme0n1", 00:28:31.123 "total_data_clusters": 1787, 00:28:31.123 "free_clusters": 1787, 00:28:31.123 "block_size": 512, 00:28:31.123 "cluster_size": 1073741824 00:28:31.123 } 00:28:31.123 ]' 00:28:31.123 21:24:09 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="419eaa62-029e-44ff-a0b1-597ad5b83734") .free_clusters' 00:28:31.123 21:24:09 -- common/autotest_common.sh@1348 -- # fc=1787 00:28:31.123 21:24:09 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="419eaa62-029e-44ff-a0b1-597ad5b83734") .cluster_size' 00:28:31.123 21:24:09 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:28:31.123 21:24:09 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:28:31.123 21:24:09 -- common/autotest_common.sh@1353 -- # echo 1829888 00:28:31.123 1829888 00:28:31.123 21:24:09 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:28:31.384 0b329d1a-e00f-43f1-b6e7-60ea5766f8f2 00:28:31.384 21:24:09 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:31.645 21:24:09 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:31.645 21:24:09 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:31.906 21:24:09 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:31.906 21:24:09 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:31.906 21:24:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:31.906 21:24:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.906 21:24:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:31.906 21:24:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:31.906 
21:24:09 -- common/autotest_common.sh@1320 -- # shift 00:28:31.906 21:24:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:31.906 21:24:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.906 21:24:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.906 21:24:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.906 21:24:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.906 21:24:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.906 21:24:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:31.907 21:24:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:32.166 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:32.166 fio-3.35 00:28:32.166 Starting 1 thread 00:28:32.426 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.012 00:28:35.012 test: (groupid=0, jobs=1): err= 0: pid=2534454: Sat Jun 8 21:24:12 2024 00:28:35.012 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(84.9MiB/2005msec) 00:28:35.012 slat (usec): min=2, max=109, avg= 2.24, stdev= 1.01 00:28:35.012 clat (usec): min=3945, max=12751, avg=6687.73, stdev=974.97 00:28:35.012 lat (usec): min=3960, max=12753, avg=6689.97, stdev=974.96 00:28:35.012 clat percentiles (usec): 00:28:35.012 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5669], 20.00th=[ 5997], 00:28:35.012 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6718], 00:28:35.012 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7832], 95.00th=[ 8455], 00:28:35.012 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12256], 99.95th=[12387], 00:28:35.012 | 99.99th=[12649] 00:28:35.012 bw ( KiB/s): min=41664, max=44288, per=99.95%, avg=43360.00, stdev=1175.83, samples=4 00:28:35.012 iops : min=10416, max=11072, avg=10840.00, stdev=293.96, samples=4 00:28:35.012 write: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(84.8MiB/2005msec); 0 zone resets 00:28:35.012 slat (nsec): min=2124, max=94642, avg=2346.03, stdev=686.08 00:28:35.012 clat (usec): min=1219, max=9242, avg=5034.30, stdev=635.64 00:28:35.012 lat (usec): min=1230, max=9244, avg=5036.64, stdev=635.65 00:28:35.012 clat percentiles (usec): 00:28:35.012 | 1.00th=[ 3359], 5.00th=[ 3982], 10.00th=[ 4293], 20.00th=[ 4555], 00:28:35.012 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 5080], 60.00th=[ 5211], 00:28:35.012 | 70.00th=[ 5342], 80.00th=[ 5538], 90.00th=[ 5735], 95.00th=[ 5997], 00:28:35.012 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 8586], 00:28:35.012 | 99.99th=[ 9241] 00:28:35.012 bw ( KiB/s): min=42192, max=43768, per=99.97%, avg=43284.00, 
stdev=733.28, samples=4 00:28:35.012 iops : min=10548, max=10942, avg=10821.00, stdev=183.32, samples=4 00:28:35.012 lat (msec) : 2=0.01%, 4=2.65%, 10=96.81%, 20=0.53% 00:28:35.012 cpu : usr=66.97%, sys=27.15%, ctx=29, majf=0, minf=6 00:28:35.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:35.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:35.013 issued rwts: total=21746,21703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:35.013 00:28:35.013 Run status group 0 (all jobs): 00:28:35.013 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=84.9MiB (89.1MB), run=2005-2005msec 00:28:35.013 WRITE: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=84.8MiB (88.9MB), run=2005-2005msec 00:28:35.013 21:24:12 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:35.013 21:24:12 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:35.989 21:24:13 -- host/fio.sh@64 -- # ls_nested_guid=ea63f8c7-14c1-41b2-809f-dad9635c0ea5 00:28:35.989 21:24:13 -- host/fio.sh@65 -- # get_lvs_free_mb ea63f8c7-14c1-41b2-809f-dad9635c0ea5 00:28:35.989 21:24:13 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ea63f8c7-14c1-41b2-809f-dad9635c0ea5 00:28:35.989 21:24:13 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:35.989 21:24:13 -- common/autotest_common.sh@1345 -- # local fc 00:28:35.989 21:24:13 -- common/autotest_common.sh@1346 -- # local cs 00:28:35.989 21:24:13 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:35.989 21:24:13 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:35.989 { 00:28:35.989 "uuid": "419eaa62-029e-44ff-a0b1-597ad5b83734", 00:28:35.989 "name": "lvs_0", 00:28:35.989 "base_bdev": "Nvme0n1", 00:28:35.989 "total_data_clusters": 1787, 00:28:35.989 "free_clusters": 0, 00:28:35.989 "block_size": 512, 00:28:35.989 "cluster_size": 1073741824 00:28:35.989 }, 00:28:35.989 { 00:28:35.989 "uuid": "ea63f8c7-14c1-41b2-809f-dad9635c0ea5", 00:28:35.989 "name": "lvs_n_0", 00:28:35.989 "base_bdev": "0b329d1a-e00f-43f1-b6e7-60ea5766f8f2", 00:28:35.989 "total_data_clusters": 457025, 00:28:35.989 "free_clusters": 457025, 00:28:35.989 "block_size": 512, 00:28:35.989 "cluster_size": 4194304 00:28:35.989 } 00:28:35.989 ]' 00:28:35.989 21:24:13 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ea63f8c7-14c1-41b2-809f-dad9635c0ea5") .free_clusters' 00:28:35.989 21:24:13 -- common/autotest_common.sh@1348 -- # fc=457025 00:28:35.989 21:24:13 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ea63f8c7-14c1-41b2-809f-dad9635c0ea5") .cluster_size' 00:28:35.989 21:24:13 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:35.989 21:24:13 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:28:35.989 21:24:13 -- common/autotest_common.sh@1353 -- # echo 1828100 00:28:35.989 1828100 00:28:35.989 21:24:13 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:28:36.932 b7b1e660-9ea6-4fc0-b945-14156d07027e 00:28:36.932 21:24:14 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:37.194 21:24:15 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:37.454 21:24:15 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:37.455 21:24:15 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:37.455 21:24:15 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:37.455 21:24:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:37.455 21:24:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.455 21:24:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:37.455 21:24:15 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:37.455 21:24:15 -- common/autotest_common.sh@1320 -- # shift 00:28:37.455 21:24:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:37.455 21:24:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:37.455 21:24:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:37.455 21:24:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:37.455 21:24:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:37.455 21:24:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:37.455 21:24:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:37.455 21:24:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:38.030 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:38.030 fio-3.35 00:28:38.030 Starting 1 thread 00:28:38.030 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.576 00:28:40.576 test: (groupid=0, jobs=1): err= 0: pid=2535792: Sat Jun 8 21:24:18 2024 00:28:40.576 read: IOPS=9619, BW=37.6MiB/s (39.4MB/s)(75.3MiB/2005msec) 00:28:40.576 slat (usec): min=2, max=103, avg= 2.28, stdev= 1.09 00:28:40.576 clat (usec): min=3278, max=12895, 
avg=7532.16, stdev=1047.40 00:28:40.576 lat (usec): min=3291, max=12897, avg=7534.44, stdev=1047.37 00:28:40.576 clat percentiles (usec): 00:28:40.576 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6718], 00:28:40.576 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:28:40.576 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8848], 95.00th=[ 9503], 00:28:40.576 | 99.00th=[10814], 99.50th=[11076], 99.90th=[11994], 99.95th=[12780], 00:28:40.576 | 99.99th=[12911] 00:28:40.576 bw ( KiB/s): min=37320, max=38840, per=99.84%, avg=38416.00, stdev=732.11, samples=4 00:28:40.576 iops : min= 9330, max= 9710, avg=9604.00, stdev=183.03, samples=4 00:28:40.576 write: IOPS=9618, BW=37.6MiB/s (39.4MB/s)(75.3MiB/2005msec); 0 zone resets 00:28:40.576 slat (nsec): min=2123, max=92881, avg=2393.76, stdev=850.20 00:28:40.576 clat (usec): min=1252, max=9292, avg=5699.85, stdev=740.69 00:28:40.576 lat (usec): min=1258, max=9295, avg=5702.25, stdev=740.74 00:28:40.576 clat percentiles (usec): 00:28:40.576 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5145], 00:28:40.576 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5932], 00:28:40.576 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6587], 95.00th=[ 6849], 00:28:40.576 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[ 8094], 99.95th=[ 8291], 00:28:40.576 | 99.99th=[ 9241] 00:28:40.576 bw ( KiB/s): min=38040, max=39168, per=100.00%, avg=38474.00, stdev=491.74, samples=4 00:28:40.576 iops : min= 9510, max= 9792, avg=9618.50, stdev=122.93, samples=4 00:28:40.576 lat (msec) : 2=0.01%, 4=0.98%, 10=97.68%, 20=1.34% 00:28:40.576 cpu : usr=69.81%, sys=24.95%, ctx=24, majf=0, minf=6 00:28:40.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:40.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.576 issued rwts: total=19287,19285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.576 00:28:40.576 Run status group 0 (all jobs): 00:28:40.576 READ: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=75.3MiB (79.0MB), run=2005-2005msec 00:28:40.576 WRITE: bw=37.6MiB/s (39.4MB/s), 37.6MiB/s-37.6MiB/s (39.4MB/s-39.4MB/s), io=75.3MiB (79.0MB), run=2005-2005msec 00:28:40.576 21:24:18 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:40.576 21:24:18 -- host/fio.sh@74 -- # sync 00:28:40.576 21:24:18 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:42.487 21:24:20 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:42.747 21:24:20 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:43.316 21:24:21 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:43.316 21:24:21 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:45.859 21:24:23 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:45.859 21:24:23 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:45.859 21:24:23 -- host/fio.sh@86 -- # nvmftestfini 00:28:45.859 
21:24:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:45.859 21:24:23 -- nvmf/common.sh@116 -- # sync 00:28:45.859 21:24:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:45.859 21:24:23 -- nvmf/common.sh@119 -- # set +e 00:28:45.859 21:24:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:45.859 21:24:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:45.859 rmmod nvme_tcp 00:28:45.859 rmmod nvme_fabrics 00:28:45.859 rmmod nvme_keyring 00:28:45.859 21:24:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:45.859 21:24:23 -- nvmf/common.sh@123 -- # set -e 00:28:45.859 21:24:23 -- nvmf/common.sh@124 -- # return 0 00:28:45.859 21:24:23 -- nvmf/common.sh@477 -- # '[' -n 2531463 ']' 00:28:45.859 21:24:23 -- nvmf/common.sh@478 -- # killprocess 2531463 00:28:45.859 21:24:23 -- common/autotest_common.sh@926 -- # '[' -z 2531463 ']' 00:28:45.859 21:24:23 -- common/autotest_common.sh@930 -- # kill -0 2531463 00:28:45.859 21:24:23 -- common/autotest_common.sh@931 -- # uname 00:28:45.859 21:24:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:45.859 21:24:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2531463 00:28:45.859 21:24:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:45.859 21:24:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:45.859 21:24:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2531463' 00:28:45.859 killing process with pid 2531463 00:28:45.859 21:24:23 -- common/autotest_common.sh@945 -- # kill 2531463 00:28:45.859 21:24:23 -- common/autotest_common.sh@950 -- # wait 2531463 00:28:45.859 21:24:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:45.859 21:24:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:45.859 21:24:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:45.859 21:24:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.859 21:24:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:45.859 21:24:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.859 21:24:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.859 21:24:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.772 21:24:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:47.772 00:28:47.772 real 0m32.686s 00:28:47.772 user 2m40.884s 00:28:47.772 sys 0m9.603s 00:28:47.772 21:24:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.772 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:28:47.772 ************************************ 00:28:47.772 END TEST nvmf_fio_host 00:28:47.772 ************************************ 00:28:47.772 21:24:25 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:47.772 21:24:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:47.772 21:24:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.772 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:28:47.772 ************************************ 00:28:47.772 START TEST nvmf_failover 00:28:47.772 ************************************ 00:28:47.772 21:24:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:48.034 * Looking for test storage... 
00:28:48.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.034 21:24:25 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.034 21:24:25 -- nvmf/common.sh@7 -- # uname -s 00:28:48.034 21:24:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.034 21:24:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.034 21:24:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.034 21:24:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.034 21:24:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.034 21:24:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.034 21:24:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.034 21:24:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.034 21:24:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.034 21:24:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.034 21:24:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.034 21:24:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.034 21:24:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.034 21:24:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.034 21:24:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.034 21:24:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.034 21:24:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.034 21:24:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.034 21:24:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.034 21:24:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.034 21:24:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.034 21:24:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.034 21:24:25 -- paths/export.sh@5 -- # export PATH 00:28:48.034 21:24:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.034 21:24:25 -- nvmf/common.sh@46 -- # : 0 00:28:48.034 21:24:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:48.034 21:24:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:48.034 21:24:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:48.034 21:24:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.034 21:24:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.034 21:24:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:48.034 21:24:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:48.034 21:24:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:48.034 21:24:25 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.034 21:24:25 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.034 21:24:25 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:48.034 21:24:25 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.034 21:24:25 -- host/failover.sh@18 -- # nvmftestinit 00:28:48.034 21:24:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:48.034 21:24:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.034 21:24:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:48.034 21:24:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:48.034 21:24:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:48.034 21:24:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.034 21:24:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.034 21:24:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.034 21:24:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:48.034 21:24:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:48.034 21:24:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:48.034 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:28:54.626 21:24:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:54.626 21:24:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:54.626 21:24:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:54.626 21:24:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:54.626 21:24:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:54.626 21:24:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:54.626 21:24:32 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:54.626 21:24:32 -- nvmf/common.sh@294 -- # net_devs=() 00:28:54.626 21:24:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:54.626 21:24:32 -- nvmf/common.sh@295 -- # e810=() 00:28:54.626 21:24:32 -- nvmf/common.sh@295 -- # local -ga e810 00:28:54.626 21:24:32 -- nvmf/common.sh@296 -- # x722=() 00:28:54.626 21:24:32 -- nvmf/common.sh@296 -- # local -ga x722 00:28:54.626 21:24:32 -- nvmf/common.sh@297 -- # mlx=() 00:28:54.626 21:24:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:54.626 21:24:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.626 21:24:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:54.626 21:24:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:54.626 21:24:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:54.626 21:24:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:54.626 21:24:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:54.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:54.626 21:24:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:54.626 21:24:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:54.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:54.626 21:24:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:54.626 21:24:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:54.626 21:24:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:54.626 21:24:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.626 21:24:32 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:28:54.626 21:24:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.626 21:24:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:54.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:54.626 21:24:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.626 21:24:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:54.626 21:24:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.626 21:24:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:54.626 21:24:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.626 21:24:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:54.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:54.626 21:24:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.626 21:24:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:54.626 21:24:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:54.627 21:24:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:54.627 21:24:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:54.627 21:24:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:54.627 21:24:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.627 21:24:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.627 21:24:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.627 21:24:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:54.627 21:24:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.627 21:24:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.627 21:24:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:54.627 21:24:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.627 21:24:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.627 21:24:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:54.627 21:24:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:54.627 21:24:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.627 21:24:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.627 21:24:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.627 21:24:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.627 21:24:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:54.888 21:24:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.888 21:24:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.888 21:24:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.888 21:24:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:54.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:28:54.888 00:28:54.888 --- 10.0.0.2 ping statistics --- 00:28:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.888 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:28:54.888 21:24:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:54.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:28:54.888 00:28:54.888 --- 10.0.0.1 ping statistics --- 00:28:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.888 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:28:54.888 21:24:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.888 21:24:32 -- nvmf/common.sh@410 -- # return 0 00:28:54.888 21:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:54.888 21:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.888 21:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:54.888 21:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:54.888 21:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.888 21:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:54.888 21:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:54.888 21:24:32 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:54.888 21:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:54.888 21:24:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:54.888 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:28:54.888 21:24:32 -- nvmf/common.sh@469 -- # nvmfpid=2541406 00:28:54.888 21:24:32 -- nvmf/common.sh@470 -- # waitforlisten 2541406 00:28:54.888 21:24:32 -- common/autotest_common.sh@819 -- # '[' -z 2541406 ']' 00:28:54.888 21:24:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.888 21:24:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:54.888 21:24:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.888 21:24:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:54.888 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:28:54.888 21:24:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:54.888 [2024-06-08 21:24:32.968994] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:54.888 [2024-06-08 21:24:32.969061] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.156 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.156 [2024-06-08 21:24:33.054721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.157 [2024-06-08 21:24:33.145765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:55.157 [2024-06-08 21:24:33.145939] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.157 [2024-06-08 21:24:33.145951] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.157 [2024-06-08 21:24:33.145958] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
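The ping exchange above closes out the network setup that nvmf/common.sh performs for the phy TCP runs: the target port cvl_0_0 sits in its own namespace at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 on the host, and nvme-tcp is loaded before the failover target is started with core mask 0xE. A condensed bash sketch of that topology follows, with interface names and addresses taken from the trace; it approximates the traced nvmf_tcp_init steps and is not the script itself.

#!/usr/bin/env bash
# Rebuild the namespace-based point-to-point topology used by the TCP tests.
set -e

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept NVMe/TCP connections arriving on the initiator-facing port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Both directions must answer before nvmf_tgt is started.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

modprobe nvme-tcp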
00:28:55.157 [2024-06-08 21:24:33.146113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.157 [2024-06-08 21:24:33.146282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.157 [2024-06-08 21:24:33.146283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.797 21:24:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:55.797 21:24:33 -- common/autotest_common.sh@852 -- # return 0 00:28:55.797 21:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:55.797 21:24:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:55.797 21:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:55.797 21:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.797 21:24:33 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:56.057 [2024-06-08 21:24:33.919941] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.057 21:24:33 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:56.057 Malloc0 00:28:56.057 21:24:34 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:56.318 21:24:34 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.579 21:24:34 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:56.579 [2024-06-08 21:24:34.605968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.579 21:24:34 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:56.840 [2024-06-08 21:24:34.770383] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:56.840 21:24:34 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:57.101 [2024-06-08 21:24:34.934955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:57.101 21:24:34 -- host/failover.sh@31 -- # bdevperf_pid=2541842 00:28:57.101 21:24:34 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:57.101 21:24:34 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:57.101 21:24:34 -- host/failover.sh@34 -- # waitforlisten 2541842 /var/tmp/bdevperf.sock 00:28:57.101 21:24:34 -- common/autotest_common.sh@819 -- # '[' -z 2541842 ']' 00:28:57.101 21:24:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:57.101 21:24:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:57.101 21:24:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:57.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:57.101 21:24:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:57.101 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:58.044 21:24:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:58.044 21:24:35 -- common/autotest_common.sh@852 -- # return 0 00:28:58.044 21:24:35 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:58.044 NVMe0n1 00:28:58.044 21:24:36 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:58.616 00:28:58.616 21:24:36 -- host/failover.sh@39 -- # run_test_pid=2542163 00:28:58.616 21:24:36 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:58.616 21:24:36 -- host/failover.sh@41 -- # sleep 1 00:28:59.558 21:24:37 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.558 [2024-06-08 21:24:37.624656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.558 [2024-06-08 21:24:37.624695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.558 [2024-06-08 21:24:37.624701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.558 [2024-06-08 21:24:37.624705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.558 [2024-06-08 21:24:37.624710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.558 [2024-06-08 21:24:37.624715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be 
set 00:28:59.559 [repeated: the same 'tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set' message was logged for every timestamp from 2024-06-08 21:24:37.624750 through 21:24:37.624939; duplicates condensed] 00:28:59.559 [2024-06-08 21:24:37.624944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.559 [2024-06-08 21:24:37.624992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc806e0 is same with the state(5) to be set 00:28:59.829 21:24:37 -- host/failover.sh@45 -- # sleep 3 00:29:03.134 21:24:40 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.134 00:29:03.134 21:24:40 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:03.134 [2024-06-08 21:24:41.078533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 
21:24:41.078604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to 
be set 00:29:03.134 [2024-06-08 21:24:41.078702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 [2024-06-08 21:24:41.078874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80ef0 is same with the state(5) to be set 00:29:03.134 21:24:41 -- host/failover.sh@50 -- # sleep 3 00:29:06.434 21:24:44 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.434 [2024-06-08 21:24:44.247936] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.434 21:24:44 -- host/failover.sh@55 -- # sleep 1 00:29:07.376 21:24:45 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:07.376 [2024-06-08 21:24:45.405906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405943] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.376 [2024-06-08 21:24:45.405994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.405999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 
00:29:07.377 [2024-06-08 21:24:45.406049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is 
same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 [2024-06-08 21:24:45.406291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b6f0 is same with the state(5) to be set 00:29:07.377 21:24:45 -- host/failover.sh@59 -- # wait 2542163 00:29:13.971 0 00:29:13.971 21:24:51 -- host/failover.sh@61 -- # killprocess 2541842 00:29:13.971 21:24:51 -- common/autotest_common.sh@926 -- # '[' -z 2541842 ']' 00:29:13.971 21:24:51 -- common/autotest_common.sh@930 -- # kill -0 2541842 00:29:13.971 21:24:51 -- common/autotest_common.sh@931 -- # uname 00:29:13.971 21:24:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:13.971 21:24:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2541842 00:29:13.971 21:24:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:13.971 21:24:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:13.971 21:24:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2541842' 00:29:13.971 killing process with pid 2541842 00:29:13.971 21:24:51 -- common/autotest_common.sh@945 -- # kill 2541842 00:29:13.971 21:24:51 -- common/autotest_common.sh@950 -- # wait 2541842 00:29:13.971 21:24:51 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:13.971 [2024-06-08 21:24:35.007065] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:13.971 [2024-06-08 21:24:35.007120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541842 ] 00:29:13.971 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.971 [2024-06-08 21:24:35.065553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.971 [2024-06-08 21:24:35.127504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.971 Running I/O for 15 seconds... 
00:29:13.971 [2024-06-08 21:24:37.625688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.971 [2024-06-08 21:24:37.625740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.971 [2024-06-08 21:24:37.625758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.971 [2024-06-08 21:24:37.625775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.971 [2024-06-08 21:24:37.625792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.971 [2024-06-08 21:24:37.625808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.971 [2024-06-08 21:24:37.625825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.971 [2024-06-08 21:24:37.625832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625889] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.625950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.625983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.625992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.625999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.626015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.972 [2024-06-08 21:24:37.626327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.626344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.626360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.626376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.972 [2024-06-08 21:24:37.626385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38720 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.972 [2024-06-08 21:24:37.626392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 
[2024-06-08 21:24:37.626559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.973 [2024-06-08 21:24:37.626850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.973 [2024-06-08 21:24:37.626875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.973 [2024-06-08 21:24:37.626882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.973-00:29:13.975 [2024-06-08 21:24:37.626891 .. 21:24:37.627802] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ/WRITE notices on sqid:1 (lba 38368-39200, len:8), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.975 [2024-06-08 21:24:37.627824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:13.975 [2024-06-08 21:24:37.627831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:13.975 [2024-06-08 21:24:37.627838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38600 len:8 PRP1 0x0 PRP2 0x0
00:29:13.975 [2024-06-08 21:24:37.627847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.975 [2024-06-08 21:24:37.627884] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x118c760 was disconnected and freed. reset controller.
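Every completion above carries the status pair "(00/08)": status code type 0x0 (Generic Command Status) and status code 0x08 (Command Aborted due to SQ Deletion), which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION". A minimal sketch of that decoding, standalone rather than SPDK source, with status-name strings approximated from the log:

```python
# Sketch (not SPDK code): decode the "(SCT/SC)" pair that
# spdk_nvme_print_completion prints, e.g. "(00/08)" throughout this log.
# Values follow the NVMe base spec's Generic Command Status table.

GENERIC_STATUS = {          # Status Code Type 0x0 (generic command status)
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",  # aborted because its submission queue was deleted
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for an NVMe completion status pair."""
    if sct == 0x0:
        name = GENERIC_STATUS.get(sc, "UNKNOWN GENERIC STATUS")
    else:
        name = "NON-GENERIC STATUS CODE TYPE"
    return f"{name} ({sct:02x}/{sc:02x})"

if __name__ == "__main__":
    # The pair reported for every aborted I/O in the block above:
    print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION (00/08)
```

In other words, the flood of notices is the expected side effect of deleting qid:1 during the reset: all I/O still queued on it is completed back to the bdev layer with this abort status rather than being silently dropped.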
00:29:13.975 [2024-06-08 21:24:37.627898] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:13.975 [2024-06-08 21:24:37.627916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:13.975 [2024-06-08 21:24:37.627925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.975 [2024-06-08 21:24:37.627934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:13.975 [2024-06-08 21:24:37.627941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.975 [2024-06-08 21:24:37.627949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:13.975 [2024-06-08 21:24:37.627956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.975 [2024-06-08 21:24:37.627963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:13.975 [2024-06-08 21:24:37.627970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.975 [2024-06-08 21:24:37.627978] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:13.975 [2024-06-08 21:24:37.630163] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:13.975 [2024-06-08 21:24:37.630184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c130 (9): Bad file descriptor
00:29:13.975 [2024-06-08 21:24:37.699140] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
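The block above is the failover path: the admin queue's pending ASYNC EVENT REQUESTs are aborted, the controller on nqn.2016-06.io.spdk:cnode1 is marked failed and disconnected (the "Bad file descriptor" flush error is the already-torn-down TCP qpair), and the reset completes against the alternate listener 10.0.0.2:4421. A hedged sketch that reconstructs this timeline from raw log text; the regexes are assumptions based only on the line layout visible here:

```python
import re

# Sketch: extract bdev_nvme failover/reset milestones from SPDK log text.
EVENTS = [
    (re.compile(r"Start failover from (\S+) to (\S+)"), "failover"),
    (re.compile(r"\[(\S+)\] in failed state"), "ctrlr_failed"),
    (re.compile(r"\[(\S+)\] resetting controller"), "ctrlr_reset_begin"),
    (re.compile(r"Resetting controller successful"), "ctrlr_reset_done"),
]

def timeline(log_text: str):
    """Yield (timestamp, event, details) for each recognised milestone."""
    stamp = re.compile(r"\[([0-9-]+ [0-9:.]+)\]")
    for line in log_text.splitlines():
        ts = stamp.search(line)
        for pattern, name in EVENTS:
            match = pattern.search(line)
            if match:
                yield (ts.group(1) if ts else None, name, match.groups())

# Run against the entries above, this yields: a failover from 10.0.0.2:4420
# to 10.0.0.2:4421, the controller entering the failed state, the reset
# starting, and the reset completing successfully.
```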
00:29:13.975-00:29:13.979 [2024-06-08 21:24:41.079185 .. 21:24:41.081259] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ/WRITE notices on sqid:1 (lba 79592-80840, len:8), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.979 [2024-06-08 21:24:41.081268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.979 [2024-06-08 21:24:41.081275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.979 [2024-06-08 21:24:41.081284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.979 [2024-06-08 21:24:41.081293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.979 [2024-06-08 21:24:41.081302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.979 [2024-06-08 21:24:41.081309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.980 [2024-06-08 21:24:41.081328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:41.081345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:41.081361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:13.980 [2024-06-08 21:24:41.081388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:13.980 [2024-06-08 21:24:41.081394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80360 len:8 PRP1 0x0 PRP2 0x0 00:29:13.980 [2024-06-08 21:24:41.081408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081446] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x118e6e0 was disconnected and freed. reset controller. 
00:29:13.980 [2024-06-08 21:24:41.081456] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:13.980 [2024-06-08 21:24:41.081474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:41.081483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:41.081498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:41.081514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:41.081529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:41.081537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.980 [2024-06-08 21:24:41.083784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.980 [2024-06-08 21:24:41.083806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c130 (9): Bad file descriptor 00:29:13.980 [2024-06-08 21:24:41.151978] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
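The long run of ABORTED - SQ DELETION notices above is the expected fallout of this failover test: deleting the submission queue on the active path aborts every queued I/O, after which bdev_nvme logs a failover to the next transport ID (10.0.0.2:4421 to 10.0.0.2:4422 here) and resets the controller. A minimal sketch of tallying those cycles from the saved bdevperf output, assuming it is captured to the try.txt file that host/failover.sh cats further down in this log:

# sketch only; try.txt is assumed to hold the bdevperf console output shown above
grep -c 'Start failover from' try.txt
grep -c 'Resetting controller successful' try.txt   # the script later requires this count to be 3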
00:29:13.980 [2024-06-08 21:24:45.406337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:45.406377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:45.406395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:45.406417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.980 [2024-06-08 21:24:45.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117c130 is same with the state(5) to be set 00:29:13.980 [2024-06-08 21:24:45.406834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.406987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.406994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.980 [2024-06-08 21:24:45.407123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.980 [2024-06-08 21:24:45.407130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:13.981 [2024-06-08 21:24:45.407287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407459] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.981 [2024-06-08 21:24:45.407599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.981 [2024-06-08 21:24:45.407641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.981 [2024-06-08 21:24:45.407648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.982 [2024-06-08 21:24:45.407780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4784 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.982 [2024-06-08 21:24:45.407845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.982 [2024-06-08 21:24:45.407879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.982 [2024-06-08 21:24:45.407895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 
21:24:45.407962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.407979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.407988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.982 [2024-06-08 21:24:45.407995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.982 [2024-06-08 21:24:45.408011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.408028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.408044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.408060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.408077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.408093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.982 [2024-06-08 21:24:45.408109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.982 [2024-06-08 21:24:45.408119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:13.983 [2024-06-08 21:24:45.408470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.983 [2024-06-08 21:24:45.408594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.983 [2024-06-08 21:24:45.408611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.983 [2024-06-08 21:24:45.408620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.984 [2024-06-08 21:24:45.408922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.984 [2024-06-08 21:24:45.408955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.984 [2024-06-08 21:24:45.408976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:13.984 [2024-06-08 21:24:45.408982] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:13.984 [2024-06-08 21:24:45.408991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0
00:29:13.984 [2024-06-08 21:24:45.408998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.984 [2024-06-08 21:24:45.409036] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1192580 was disconnected and freed. reset controller.
00:29:13.984 [2024-06-08 21:24:45.409045] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:13.984 [2024-06-08 21:24:45.409053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:13.984 [2024-06-08 21:24:45.411471] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:13.984 [2024-06-08 21:24:45.411495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c130 (9): Bad file descriptor
00:29:13.984 [2024-06-08 21:24:45.558186] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:13.984
00:29:13.984 Latency(us)
00:29:13.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.984 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:13.984 Verification LBA range: start 0x0 length 0x4000
00:29:13.984 NVMe0n1 : 15.00 19652.37 76.77 973.98 0.00 6189.59 1078.61 15619.41
00:29:13.984 ===================================================================================================================
00:29:13.984 Total : 19652.37 76.77 973.98 0.00 6189.59 1078.61 15619.41
00:29:13.984 Received shutdown signal, test time was about 15.000000 seconds
00:29:13.984
00:29:13.984 Latency(us)
00:29:13.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.984 ===================================================================================================================
00:29:13.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:13.984 21:24:51 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:13.984 21:24:51 -- host/failover.sh@65 -- # count=3
00:29:13.984 21:24:51 -- host/failover.sh@67 -- # (( count != 3 ))
00:29:13.984 21:24:51 -- host/failover.sh@73 -- # bdevperf_pid=2545020
00:29:13.984 21:24:51 -- host/failover.sh@75 -- # waitforlisten 2545020 /var/tmp/bdevperf.sock
00:29:13.984 21:24:51 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:13.985 21:24:51 -- common/autotest_common.sh@819 -- # '[' -z 2545020 ']'
00:29:13.985 21:24:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:13.985 21:24:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:13.985 21:24:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:13.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:13.985 21:24:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:13.985 21:24:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.556 21:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:14.556 21:24:52 -- common/autotest_common.sh@852 -- # return 0 00:29:14.556 21:24:52 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:14.816 [2024-06-08 21:24:52.749244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:14.816 21:24:52 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:14.816 [2024-06-08 21:24:52.905610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:15.076 21:24:52 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:15.335 NVMe0n1 00:29:15.335 21:24:53 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:15.595 00:29:15.595 21:24:53 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:15.855 00:29:15.855 21:24:53 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:15.855 21:24:53 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:16.115 21:24:53 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:16.115 21:24:54 -- host/failover.sh@87 -- # sleep 3 00:29:19.414 21:24:57 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.414 21:24:57 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:19.414 21:24:57 -- host/failover.sh@90 -- # run_test_pid=2546168 00:29:19.414 21:24:57 -- host/failover.sh@92 -- # wait 2546168 00:29:19.414 21:24:57 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:20.356 0 00:29:20.356 21:24:58 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:20.356 [2024-06-08 21:24:51.850610] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:20.356 [2024-06-08 21:24:51.850669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545020 ] 00:29:20.356 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.356 [2024-06-08 21:24:51.909630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.356 [2024-06-08 21:24:51.971745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.356 [2024-06-08 21:24:54.118671] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:20.356 [2024-06-08 21:24:54.118715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.356 [2024-06-08 21:24:54.118726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.356 [2024-06-08 21:24:54.118735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.356 [2024-06-08 21:24:54.118743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.356 [2024-06-08 21:24:54.118751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.356 [2024-06-08 21:24:54.118758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.356 [2024-06-08 21:24:54.118766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.356 [2024-06-08 21:24:54.118773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.356 [2024-06-08 21:24:54.118780] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.356 [2024-06-08 21:24:54.118804] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.356 [2024-06-08 21:24:54.118818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b90130 (9): Bad file descriptor 00:29:20.356 [2024-06-08 21:24:54.172220] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:20.356 Running I/O for 1 seconds... 
00:29:20.356 00:29:20.356 Latency(us) 00:29:20.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.356 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:20.356 Verification LBA range: start 0x0 length 0x4000 00:29:20.356 NVMe0n1 : 1.00 19943.35 77.90 0.00 0.00 6387.47 1187.84 11741.87 00:29:20.356 =================================================================================================================== 00:29:20.356 Total : 19943.35 77.90 0.00 0.00 6387.47 1187.84 11741.87 00:29:20.356 21:24:58 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.356 21:24:58 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:20.617 21:24:58 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.878 21:24:58 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.878 21:24:58 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:20.878 21:24:58 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:21.138 21:24:59 -- host/failover.sh@101 -- # sleep 3 00:29:24.490 21:25:02 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:24.490 21:25:02 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:24.490 21:25:02 -- host/failover.sh@108 -- # killprocess 2545020 00:29:24.490 21:25:02 -- common/autotest_common.sh@926 -- # '[' -z 2545020 ']' 00:29:24.490 21:25:02 -- common/autotest_common.sh@930 -- # kill -0 2545020 00:29:24.490 21:25:02 -- common/autotest_common.sh@931 -- # uname 00:29:24.490 21:25:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.491 21:25:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2545020 00:29:24.491 21:25:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:24.491 21:25:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:24.491 21:25:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2545020' 00:29:24.491 killing process with pid 2545020 00:29:24.491 21:25:02 -- common/autotest_common.sh@945 -- # kill 2545020 00:29:24.491 21:25:02 -- common/autotest_common.sh@950 -- # wait 2545020 00:29:24.491 21:25:02 -- host/failover.sh@110 -- # sync 00:29:24.491 21:25:02 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.752 21:25:02 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:24.752 21:25:02 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:24.752 21:25:02 -- host/failover.sh@116 -- # nvmftestfini 00:29:24.752 21:25:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:24.752 21:25:02 -- nvmf/common.sh@116 -- # sync 00:29:24.752 21:25:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:24.752 21:25:02 -- nvmf/common.sh@119 -- # set +e 00:29:24.752 21:25:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:24.752 21:25:02 -- nvmf/common.sh@121 -- 
# modprobe -v -r nvme-tcp 00:29:24.752 rmmod nvme_tcp 00:29:24.752 rmmod nvme_fabrics 00:29:24.752 rmmod nvme_keyring 00:29:24.752 21:25:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:24.752 21:25:02 -- nvmf/common.sh@123 -- # set -e 00:29:24.752 21:25:02 -- nvmf/common.sh@124 -- # return 0 00:29:24.752 21:25:02 -- nvmf/common.sh@477 -- # '[' -n 2541406 ']' 00:29:24.752 21:25:02 -- nvmf/common.sh@478 -- # killprocess 2541406 00:29:24.752 21:25:02 -- common/autotest_common.sh@926 -- # '[' -z 2541406 ']' 00:29:24.752 21:25:02 -- common/autotest_common.sh@930 -- # kill -0 2541406 00:29:24.752 21:25:02 -- common/autotest_common.sh@931 -- # uname 00:29:24.752 21:25:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.752 21:25:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2541406 00:29:24.752 21:25:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:24.752 21:25:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:24.752 21:25:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2541406' 00:29:24.752 killing process with pid 2541406 00:29:24.752 21:25:02 -- common/autotest_common.sh@945 -- # kill 2541406 00:29:24.752 21:25:02 -- common/autotest_common.sh@950 -- # wait 2541406 00:29:25.013 21:25:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:25.013 21:25:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:25.013 21:25:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:25.013 21:25:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.013 21:25:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:25.013 21:25:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.013 21:25:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.013 21:25:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.928 21:25:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:26.928 00:29:26.928 real 0m39.159s 00:29:26.928 user 2m1.420s 00:29:26.928 sys 0m7.850s 00:29:26.928 21:25:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.928 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:26.928 ************************************ 00:29:26.928 END TEST nvmf_failover 00:29:26.928 ************************************ 00:29:26.928 21:25:04 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:26.928 21:25:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:26.928 21:25:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.928 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:29:26.928 ************************************ 00:29:26.928 START TEST nvmf_discovery 00:29:26.928 ************************************ 00:29:26.928 21:25:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:27.189 * Looking for test storage... 
00:29:27.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.189 21:25:05 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.189 21:25:05 -- nvmf/common.sh@7 -- # uname -s 00:29:27.189 21:25:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.189 21:25:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.189 21:25:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.189 21:25:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.189 21:25:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.189 21:25:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.189 21:25:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.189 21:25:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.189 21:25:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.189 21:25:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.189 21:25:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:27.189 21:25:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:27.189 21:25:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.190 21:25:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.190 21:25:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.190 21:25:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.190 21:25:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.190 21:25:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.190 21:25:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.190 21:25:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.190 21:25:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.190 21:25:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.190 21:25:05 -- paths/export.sh@5 -- # export PATH 00:29:27.190 21:25:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.190 21:25:05 -- nvmf/common.sh@46 -- # : 0 00:29:27.190 21:25:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:27.190 21:25:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:27.190 21:25:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:27.190 21:25:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.190 21:25:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.190 21:25:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:27.190 21:25:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:27.190 21:25:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:27.190 21:25:05 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:27.190 21:25:05 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:27.190 21:25:05 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:27.190 21:25:05 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:27.190 21:25:05 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:27.190 21:25:05 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:27.190 21:25:05 -- host/discovery.sh@25 -- # nvmftestinit 00:29:27.190 21:25:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:27.190 21:25:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.190 21:25:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:27.190 21:25:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:27.190 21:25:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:27.190 21:25:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.190 21:25:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.190 21:25:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.190 21:25:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:27.190 21:25:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:27.190 21:25:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:27.190 21:25:05 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 21:25:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:35.334 21:25:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:35.334 21:25:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:35.334 21:25:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:35.334 21:25:11 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:35.334 21:25:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:35.334 21:25:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:35.334 21:25:11 -- nvmf/common.sh@294 -- # net_devs=() 00:29:35.334 21:25:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:35.334 21:25:11 -- nvmf/common.sh@295 -- # e810=() 00:29:35.334 21:25:11 -- nvmf/common.sh@295 -- # local -ga e810 00:29:35.334 21:25:11 -- nvmf/common.sh@296 -- # x722=() 00:29:35.334 21:25:11 -- nvmf/common.sh@296 -- # local -ga x722 00:29:35.334 21:25:11 -- nvmf/common.sh@297 -- # mlx=() 00:29:35.334 21:25:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:35.334 21:25:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.334 21:25:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:35.334 21:25:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:35.334 21:25:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:35.334 21:25:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:35.334 21:25:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:35.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:35.334 21:25:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:35.334 21:25:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:35.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:35.334 21:25:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:35.334 21:25:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:35.334 
21:25:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.334 21:25:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:35.334 21:25:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.334 21:25:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:35.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:35.334 21:25:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.334 21:25:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:35.334 21:25:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.334 21:25:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:35.334 21:25:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.334 21:25:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:35.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:35.334 21:25:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.334 21:25:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:35.334 21:25:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:35.334 21:25:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:35.334 21:25:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:35.334 21:25:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.334 21:25:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.334 21:25:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.334 21:25:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:35.334 21:25:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.334 21:25:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.334 21:25:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:35.334 21:25:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.334 21:25:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.334 21:25:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:35.334 21:25:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:35.334 21:25:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.334 21:25:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.334 21:25:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.334 21:25:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.334 21:25:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:35.334 21:25:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.334 21:25:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.334 21:25:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.334 21:25:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:35.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:29:35.334 00:29:35.334 --- 10.0.0.2 ping statistics --- 00:29:35.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.334 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:29:35.334 21:25:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.448 ms 00:29:35.334 00:29:35.334 --- 10.0.0.1 ping statistics --- 00:29:35.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.334 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:29:35.334 21:25:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.334 21:25:12 -- nvmf/common.sh@410 -- # return 0 00:29:35.334 21:25:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:35.334 21:25:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.334 21:25:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:35.334 21:25:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:35.334 21:25:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.334 21:25:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:35.334 21:25:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:35.334 21:25:12 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:35.334 21:25:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:35.334 21:25:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:35.334 21:25:12 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 21:25:12 -- nvmf/common.sh@469 -- # nvmfpid=2551296 00:29:35.334 21:25:12 -- nvmf/common.sh@470 -- # waitforlisten 2551296 00:29:35.334 21:25:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:35.334 21:25:12 -- common/autotest_common.sh@819 -- # '[' -z 2551296 ']' 00:29:35.334 21:25:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.334 21:25:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.334 21:25:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.334 21:25:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.334 21:25:12 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 [2024-06-08 21:25:12.332363] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:35.334 [2024-06-08 21:25:12.332431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.334 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.334 [2024-06-08 21:25:12.417858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.334 [2024-06-08 21:25:12.507792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:35.334 [2024-06-08 21:25:12.507940] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.334 [2024-06-08 21:25:12.507950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.334 [2024-06-08 21:25:12.507959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:35.334 [2024-06-08 21:25:12.507984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.334 21:25:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:35.334 21:25:13 -- common/autotest_common.sh@852 -- # return 0 00:29:35.334 21:25:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:35.334 21:25:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:35.334 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 21:25:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.334 21:25:13 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.334 21:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.334 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 [2024-06-08 21:25:13.165616] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.334 21:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.334 21:25:13 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:35.334 21:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.334 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 [2024-06-08 21:25:13.177828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:35.334 21:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.334 21:25:13 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:35.334 21:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.334 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 null0 00:29:35.334 21:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.334 21:25:13 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:35.334 21:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.334 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 null1 00:29:35.334 21:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.334 21:25:13 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:35.334 21:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.334 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.334 21:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.334 21:25:13 -- host/discovery.sh@45 -- # hostpid=2551497 00:29:35.334 21:25:13 -- host/discovery.sh@46 -- # waitforlisten 2551497 /tmp/host.sock 00:29:35.334 21:25:13 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:35.334 21:25:13 -- common/autotest_common.sh@819 -- # '[' -z 2551497 ']' 00:29:35.334 21:25:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:35.335 21:25:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.335 21:25:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:35.335 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:35.335 21:25:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.335 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.335 [2024-06-08 21:25:13.267540] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:35.335 [2024-06-08 21:25:13.267602] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551497 ] 00:29:35.335 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.335 [2024-06-08 21:25:13.330705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.335 [2024-06-08 21:25:13.402984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:35.335 [2024-06-08 21:25:13.403119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.276 21:25:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:36.276 21:25:14 -- common/autotest_common.sh@852 -- # return 0 00:29:36.276 21:25:14 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.276 21:25:14 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@72 -- # notify_id=0 00:29:36.276 21:25:14 -- host/discovery.sh@78 -- # get_subsystem_names 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # sort 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # xargs 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:29:36.276 21:25:14 -- host/discovery.sh@79 -- # get_bdev_list 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # sort 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # xargs 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:29:36.276 21:25:14 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@82 -- # get_subsystem_names 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # jq -r 
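At this point discovery.sh has brought up two SPDK applications: the nvmf target inside the cvl_0_0_ns_spdk namespace (default RPC socket) and a second nvmf_tgt on /tmp/host.sock that plays the host role. The following is a condensed sketch of that setup and of the steps the trace goes through next, written with the same rpc_cmd wrapper the harness uses (not a verbatim excerpt):

# Target side (default socket): TCP transport, a discovery listener on 8009,
# and two null bdevs that later back nqn.2016-06.io.spdk:cnode0.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

# Host side (-s /tmp/host.sock): follow the discovery service instead of
# connecting to a fixed portal.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# Whatever the target exposes afterwards should show up on the host without
# an explicit connect:
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # expect nvme0
rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # expect nvme0n1

The xtrace below performs exactly these checks, first verifying that the controller and bdev lists are empty before the subsystem is created.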
'.[].name' 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # sort 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # xargs 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:29:36.276 21:25:14 -- host/discovery.sh@83 -- # get_bdev_list 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # sort 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # xargs 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:36.276 21:25:14 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@86 -- # get_subsystem_names 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # sort 00:29:36.276 21:25:14 -- host/discovery.sh@59 -- # xargs 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.276 21:25:14 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:29:36.276 21:25:14 -- host/discovery.sh@87 -- # get_bdev_list 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # xargs 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.276 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.276 21:25:14 -- host/discovery.sh@55 -- # sort 00:29:36.276 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.537 21:25:14 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:36.537 21:25:14 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:36.537 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.537 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.537 [2024-06-08 21:25:14.396965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.537 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.537 21:25:14 -- host/discovery.sh@92 -- # get_subsystem_names 00:29:36.537 21:25:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.537 21:25:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.537 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.537 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.537 21:25:14 -- host/discovery.sh@59 -- # sort 00:29:36.537 21:25:14 
-- host/discovery.sh@59 -- # xargs 00:29:36.537 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.537 21:25:14 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:36.537 21:25:14 -- host/discovery.sh@93 -- # get_bdev_list 00:29:36.537 21:25:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.537 21:25:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.537 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.537 21:25:14 -- host/discovery.sh@55 -- # sort 00:29:36.537 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.537 21:25:14 -- host/discovery.sh@55 -- # xargs 00:29:36.537 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.537 21:25:14 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:29:36.537 21:25:14 -- host/discovery.sh@94 -- # get_notification_count 00:29:36.537 21:25:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:36.537 21:25:14 -- host/discovery.sh@74 -- # jq '. | length' 00:29:36.537 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.537 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.537 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.537 21:25:14 -- host/discovery.sh@74 -- # notification_count=0 00:29:36.537 21:25:14 -- host/discovery.sh@75 -- # notify_id=0 00:29:36.537 21:25:14 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:29:36.538 21:25:14 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:36.538 21:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.538 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:29:36.538 21:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.538 21:25:14 -- host/discovery.sh@100 -- # sleep 1 00:29:37.107 [2024-06-08 21:25:15.094546] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:37.107 [2024-06-08 21:25:15.094566] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:37.107 [2024-06-08 21:25:15.094581] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:37.107 [2024-06-08 21:25:15.182862] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:37.367 [2024-06-08 21:25:15.408915] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:37.368 [2024-06-08 21:25:15.408938] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:37.628 21:25:15 -- host/discovery.sh@101 -- # get_subsystem_names 00:29:37.628 21:25:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.628 21:25:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.628 21:25:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.628 21:25:15 -- host/discovery.sh@59 -- # sort 00:29:37.628 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:37.628 21:25:15 -- host/discovery.sh@59 -- # xargs 00:29:37.628 21:25:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.628 21:25:15 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.628 21:25:15 -- host/discovery.sh@102 -- # get_bdev_list 00:29:37.628 21:25:15 -- host/discovery.sh@55 -- # 
xargs 00:29:37.628 21:25:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.628 21:25:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.628 21:25:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.628 21:25:15 -- host/discovery.sh@55 -- # sort 00:29:37.628 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:37.628 21:25:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.628 21:25:15 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:37.628 21:25:15 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:29:37.628 21:25:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:37.628 21:25:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:37.628 21:25:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.628 21:25:15 -- host/discovery.sh@63 -- # sort -n 00:29:37.628 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:37.628 21:25:15 -- host/discovery.sh@63 -- # xargs 00:29:37.628 21:25:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.889 21:25:15 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:29:37.889 21:25:15 -- host/discovery.sh@104 -- # get_notification_count 00:29:37.889 21:25:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:37.889 21:25:15 -- host/discovery.sh@74 -- # jq '. | length' 00:29:37.889 21:25:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.889 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:37.889 21:25:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.889 21:25:15 -- host/discovery.sh@74 -- # notification_count=1 00:29:37.889 21:25:15 -- host/discovery.sh@75 -- # notify_id=1 00:29:37.889 21:25:15 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:29:37.889 21:25:15 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:37.889 21:25:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.889 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:29:37.889 21:25:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.889 21:25:15 -- host/discovery.sh@109 -- # sleep 1 00:29:38.829 21:25:16 -- host/discovery.sh@110 -- # get_bdev_list 00:29:38.829 21:25:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.829 21:25:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:38.829 21:25:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.829 21:25:16 -- host/discovery.sh@55 -- # sort 00:29:38.829 21:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:38.829 21:25:16 -- host/discovery.sh@55 -- # xargs 00:29:38.829 21:25:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.829 21:25:16 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:38.829 21:25:16 -- host/discovery.sh@111 -- # get_notification_count 00:29:38.829 21:25:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:38.829 21:25:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:38.829 21:25:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.829 21:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:38.829 21:25:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.829 21:25:16 -- host/discovery.sh@74 -- # notification_count=1 00:29:38.829 21:25:16 -- host/discovery.sh@75 -- # notify_id=2 00:29:38.829 21:25:16 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:29:38.829 21:25:16 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:38.829 21:25:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.829 21:25:16 -- common/autotest_common.sh@10 -- # set +x 00:29:38.829 [2024-06-08 21:25:16.895537] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:38.829 [2024-06-08 21:25:16.895944] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:38.829 [2024-06-08 21:25:16.895971] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:38.829 21:25:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.829 21:25:16 -- host/discovery.sh@117 -- # sleep 1 00:29:39.089 [2024-06-08 21:25:16.984237] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:39.089 [2024-06-08 21:25:17.082072] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:39.089 [2024-06-08 21:25:17.082094] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:39.089 [2024-06-08 21:25:17.082100] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:40.032 21:25:17 -- host/discovery.sh@118 -- # get_subsystem_names 00:29:40.032 21:25:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.032 21:25:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:40.032 21:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.032 21:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:40.032 21:25:17 -- host/discovery.sh@59 -- # sort 00:29:40.032 21:25:17 -- host/discovery.sh@59 -- # xargs 00:29:40.032 21:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.032 21:25:17 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.032 21:25:17 -- host/discovery.sh@119 -- # get_bdev_list 00:29:40.032 21:25:17 -- host/discovery.sh@55 -- # sort 00:29:40.032 21:25:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.032 21:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.032 21:25:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.032 21:25:17 -- common/autotest_common.sh@10 -- # set +x 00:29:40.032 21:25:17 -- host/discovery.sh@55 -- # xargs 00:29:40.032 21:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:29:40.032 21:25:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:40.032 21:25:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:40.032 21:25:18 -- host/discovery.sh@63 
-- # xargs 00:29:40.032 21:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.032 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:40.032 21:25:18 -- host/discovery.sh@63 -- # sort -n 00:29:40.032 21:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@121 -- # get_notification_count 00:29:40.032 21:25:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:40.032 21:25:18 -- host/discovery.sh@74 -- # jq '. | length' 00:29:40.032 21:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.032 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:40.032 21:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@74 -- # notification_count=0 00:29:40.032 21:25:18 -- host/discovery.sh@75 -- # notify_id=2 00:29:40.032 21:25:18 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.032 21:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.032 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:29:40.032 [2024-06-08 21:25:18.095130] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:40.032 [2024-06-08 21:25:18.095153] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:40.032 21:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.032 21:25:18 -- host/discovery.sh@127 -- # sleep 1 00:29:40.032 [2024-06-08 21:25:18.102724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:40.032 [2024-06-08 21:25:18.102743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.032 [2024-06-08 21:25:18.102753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:40.032 [2024-06-08 21:25:18.102760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.032 [2024-06-08 21:25:18.102768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:40.032 [2024-06-08 21:25:18.102777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.032 [2024-06-08 21:25:18.102785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:40.032 [2024-06-08 21:25:18.102792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.032 [2024-06-08 21:25:18.102799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.032 [2024-06-08 21:25:18.112740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.032 [2024-06-08 21:25:18.122782] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.294 [2024-06-08 21:25:18.123268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-06-08 21:25:18.123893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-06-08 21:25:18.123931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.294 [2024-06-08 21:25:18.123942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.294 [2024-06-08 21:25:18.123961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.294 [2024-06-08 21:25:18.123989] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.294 [2024-06-08 21:25:18.123998] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.294 [2024-06-08 21:25:18.124006] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.294 [2024-06-08 21:25:18.124021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.294 [2024-06-08 21:25:18.132837] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.294 [2024-06-08 21:25:18.133327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-06-08 21:25:18.133852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.294 [2024-06-08 21:25:18.133890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.295 [2024-06-08 21:25:18.133901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.295 [2024-06-08 21:25:18.133919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.295 [2024-06-08 21:25:18.133944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.295 [2024-06-08 21:25:18.133952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.295 [2024-06-08 21:25:18.133960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.295 [2024-06-08 21:25:18.133975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.295 [2024-06-08 21:25:18.142894] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.295 [2024-06-08 21:25:18.143382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.143935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.143973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.295 [2024-06-08 21:25:18.143984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.295 [2024-06-08 21:25:18.144002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.295 [2024-06-08 21:25:18.144045] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.295 [2024-06-08 21:25:18.144054] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.295 [2024-06-08 21:25:18.144063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.295 [2024-06-08 21:25:18.144078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.295 [2024-06-08 21:25:18.152952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.295 [2024-06-08 21:25:18.153397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.153917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.153956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.295 [2024-06-08 21:25:18.153966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.295 [2024-06-08 21:25:18.153984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.295 [2024-06-08 21:25:18.154009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.295 [2024-06-08 21:25:18.154017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.295 [2024-06-08 21:25:18.154025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.295 [2024-06-08 21:25:18.154040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.295 [2024-06-08 21:25:18.163008] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.295 [2024-06-08 21:25:18.163587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.164008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.164023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.295 [2024-06-08 21:25:18.164032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.295 [2024-06-08 21:25:18.164051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.295 [2024-06-08 21:25:18.164077] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.295 [2024-06-08 21:25:18.164085] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.295 [2024-06-08 21:25:18.164094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.295 [2024-06-08 21:25:18.164109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.295 [2024-06-08 21:25:18.173068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.295 [2024-06-08 21:25:18.173640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.174131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.174144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.295 [2024-06-08 21:25:18.174154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.295 [2024-06-08 21:25:18.174185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.295 [2024-06-08 21:25:18.174203] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.295 [2024-06-08 21:25:18.174210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.295 [2024-06-08 21:25:18.174218] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.295 [2024-06-08 21:25:18.174233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.295 [2024-06-08 21:25:18.183123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:40.295 [2024-06-08 21:25:18.183715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.183989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.295 [2024-06-08 21:25:18.184003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3bf10 with addr=10.0.0.2, port=4420 00:29:40.295 [2024-06-08 21:25:18.184013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3bf10 is same with the state(5) to be set 00:29:40.295 [2024-06-08 21:25:18.184031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3bf10 (9): Bad file descriptor 00:29:40.295 [2024-06-08 21:25:18.184071] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:40.295 [2024-06-08 21:25:18.184088] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:40.295 [2024-06-08 21:25:18.184113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:40.295 [2024-06-08 21:25:18.184124] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:40.295 [2024-06-08 21:25:18.184131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:40.295 [2024-06-08 21:25:18.184146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
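The loop above shows the host repeatedly failing to reconnect to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 (connect() returns errno 111, ECONNREFUSED) until the discovery poller drops the stale 4420 path and keeps 4421. The check that follows in discovery.sh reads the surviving path back through the host RPC socket; a minimal sketch of that polling step, assuming the same /tmp/host.sock host application and controller name nvme0 seen in the log, and that rpc.py from the SPDK checkout is invoked directly (rpc_cmd in the test is a thin wrapper around it):

  # Poll the host-side controller until its only remaining path reports port 4421.
  # Assumptions: host nvmf_tgt was started with "-r /tmp/host.sock", the attached
  # controller is named nvme0, and ./spdk/scripts/rpc.py is the SPDK RPC client.
  while :; do
      port=$(./spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
             | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
      [ "$port" = 4421 ] && break
      sleep 1
  done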
00:29:41.239 21:25:19 -- host/discovery.sh@128 -- # get_subsystem_names 00:29:41.239 21:25:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:41.239 21:25:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.239 21:25:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:41.239 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:41.239 21:25:19 -- host/discovery.sh@59 -- # sort 00:29:41.239 21:25:19 -- host/discovery.sh@59 -- # xargs 00:29:41.239 21:25:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@129 -- # get_bdev_list 00:29:41.239 21:25:19 -- host/discovery.sh@55 -- # sort 00:29:41.239 21:25:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.239 21:25:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:41.239 21:25:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.239 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:41.239 21:25:19 -- host/discovery.sh@55 -- # xargs 00:29:41.239 21:25:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:29:41.239 21:25:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:41.239 21:25:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:41.239 21:25:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.239 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:41.239 21:25:19 -- host/discovery.sh@63 -- # sort -n 00:29:41.239 21:25:19 -- host/discovery.sh@63 -- # xargs 00:29:41.239 21:25:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@131 -- # get_notification_count 00:29:41.239 21:25:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:41.239 21:25:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:41.239 21:25:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.239 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:41.239 21:25:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@74 -- # notification_count=0 00:29:41.239 21:25:19 -- host/discovery.sh@75 -- # notify_id=2 00:29:41.239 21:25:19 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:41.239 21:25:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.239 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:29:41.239 21:25:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.239 21:25:19 -- host/discovery.sh@135 -- # sleep 1 00:29:42.625 21:25:20 -- host/discovery.sh@136 -- # get_subsystem_names 00:29:42.625 21:25:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:42.625 21:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.625 21:25:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:42.625 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:42.625 21:25:20 -- host/discovery.sh@59 -- # sort 00:29:42.625 21:25:20 -- host/discovery.sh@59 -- # xargs 00:29:42.625 21:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.625 21:25:20 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:29:42.625 21:25:20 -- host/discovery.sh@137 -- # get_bdev_list 00:29:42.625 21:25:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:42.625 21:25:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:42.625 21:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.625 21:25:20 -- host/discovery.sh@55 -- # sort 00:29:42.625 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:42.625 21:25:20 -- host/discovery.sh@55 -- # xargs 00:29:42.625 21:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.625 21:25:20 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:29:42.625 21:25:20 -- host/discovery.sh@138 -- # get_notification_count 00:29:42.625 21:25:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:42.625 21:25:20 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:42.625 21:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.625 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:42.625 21:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.625 21:25:20 -- host/discovery.sh@74 -- # notification_count=2 00:29:42.625 21:25:20 -- host/discovery.sh@75 -- # notify_id=4 00:29:42.625 21:25:20 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:29:42.625 21:25:20 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:42.625 21:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.625 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:29:43.567 [2024-06-08 21:25:21.552628] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:43.567 [2024-06-08 21:25:21.552647] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:43.567 [2024-06-08 21:25:21.552661] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:43.567 [2024-06-08 21:25:21.642957] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:44.139 [2024-06-08 21:25:21.951871] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:44.139 [2024-06-08 21:25:21.951903] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:44.139 21:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.139 21:25:21 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.139 21:25:21 -- common/autotest_common.sh@640 -- # local es=0 00:29:44.139 21:25:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.139 21:25:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:44.139 21:25:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.139 21:25:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:44.139 21:25:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.139 21:25:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.139 21:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 request: 00:29:44.139 { 00:29:44.139 "name": "nvme", 00:29:44.139 "trtype": "tcp", 00:29:44.139 "traddr": "10.0.0.2", 00:29:44.139 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:44.139 "adrfam": "ipv4", 00:29:44.139 "trsvcid": "8009", 00:29:44.139 "wait_for_attach": true, 00:29:44.139 "method": "bdev_nvme_start_discovery", 00:29:44.139 "req_id": 1 00:29:44.139 } 00:29:44.139 Got JSON-RPC error response 00:29:44.139 response: 00:29:44.139 { 00:29:44.139 "code": -17, 00:29:44.139 "message": "File exists" 00:29:44.139 } 00:29:44.139 21:25:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:44.139 21:25:21 -- common/autotest_common.sh@643 -- # es=1 00:29:44.139 21:25:21 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.139 21:25:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.139 21:25:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.139 21:25:21 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:29:44.139 21:25:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:44.139 21:25:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:44.139 21:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:21 -- host/discovery.sh@67 -- # sort 00:29:44.139 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 21:25:21 -- host/discovery.sh@67 -- # xargs 00:29:44.139 21:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.139 21:25:22 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:29:44.139 21:25:22 -- host/discovery.sh@147 -- # get_bdev_list 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # sort 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # xargs 00:29:44.139 21:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 21:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.139 21:25:22 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:44.139 21:25:22 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.139 21:25:22 -- common/autotest_common.sh@640 -- # local es=0 00:29:44.139 21:25:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.139 21:25:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:44.139 21:25:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.139 21:25:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:44.139 21:25:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.139 21:25:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:44.139 21:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 request: 00:29:44.139 { 00:29:44.139 "name": "nvme_second", 00:29:44.139 "trtype": "tcp", 00:29:44.139 "traddr": "10.0.0.2", 00:29:44.139 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:44.139 "adrfam": "ipv4", 00:29:44.139 "trsvcid": "8009", 00:29:44.139 "wait_for_attach": true, 00:29:44.139 "method": "bdev_nvme_start_discovery", 00:29:44.139 "req_id": 1 00:29:44.139 } 00:29:44.139 Got JSON-RPC error response 00:29:44.139 response: 00:29:44.139 { 00:29:44.139 "code": -17, 00:29:44.139 "message": "File exists" 00:29:44.139 } 00:29:44.139 21:25:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:44.139 21:25:22 -- common/autotest_common.sh@643 -- # es=1 00:29:44.139 21:25:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.139 21:25:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.139 21:25:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.139 
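Both bdev_nvme_start_discovery calls above are deliberate negative cases: discovery was already restarted against 10.0.0.2:8009 under the name nvme, so registering it again, and registering nvme_second against the same 8009 endpoint, is rejected with JSON-RPC error -17, "File exists", while the existing controller and bdev list stay intact. A hedged sketch of the same check outside the test harness, assuming the /tmp/host.sock host application from the log and rpc.py from the SPDK checkout:

  # Re-registering discovery for an endpoint that already has a discovery service
  # should fail with -17 ("File exists") and leave the existing service untouched.
  if ./spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
         -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
         -q nqn.2021-12.io.spdk:test -w; then
      echo "expected JSON-RPC -17 (File exists)" >&2
      exit 1
  fi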
21:25:22 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:29:44.139 21:25:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:44.139 21:25:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:44.139 21:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 21:25:22 -- host/discovery.sh@67 -- # sort 00:29:44.139 21:25:22 -- host/discovery.sh@67 -- # xargs 00:29:44.139 21:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.139 21:25:22 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:29:44.139 21:25:22 -- host/discovery.sh@153 -- # get_bdev_list 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # xargs 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:44.139 21:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:22 -- host/discovery.sh@55 -- # sort 00:29:44.139 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 21:25:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.139 21:25:22 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:44.139 21:25:22 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:44.139 21:25:22 -- common/autotest_common.sh@640 -- # local es=0 00:29:44.139 21:25:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:44.139 21:25:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:44.139 21:25:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.139 21:25:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:44.139 21:25:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:44.139 21:25:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:44.139 21:25:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.139 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.523 [2024-06-08 21:25:23.216223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.523 [2024-06-08 21:25:23.216619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.523 [2024-06-08 21:25:23.216656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b316c0 with addr=10.0.0.2, port=8010 00:29:45.523 [2024-06-08 21:25:23.216672] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:45.523 [2024-06-08 21:25:23.216685] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:45.523 [2024-06-08 21:25:23.216693] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:46.465 [2024-06-08 21:25:24.218543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.465 [2024-06-08 21:25:24.218982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.465 [2024-06-08 21:25:24.218994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1b316c0 with addr=10.0.0.2, port=8010 00:29:46.465 [2024-06-08 21:25:24.219006] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:46.465 [2024-06-08 21:25:24.219013] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:46.465 [2024-06-08 21:25:24.219019] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:47.443 [2024-06-08 21:25:25.220472] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:47.443 request: 00:29:47.443 { 00:29:47.443 "name": "nvme_second", 00:29:47.443 "trtype": "tcp", 00:29:47.443 "traddr": "10.0.0.2", 00:29:47.443 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:47.443 "adrfam": "ipv4", 00:29:47.443 "trsvcid": "8010", 00:29:47.443 "attach_timeout_ms": 3000, 00:29:47.443 "method": "bdev_nvme_start_discovery", 00:29:47.443 "req_id": 1 00:29:47.443 } 00:29:47.443 Got JSON-RPC error response 00:29:47.443 response: 00:29:47.443 { 00:29:47.443 "code": -110, 00:29:47.443 "message": "Connection timed out" 00:29:47.443 } 00:29:47.443 21:25:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:47.443 21:25:25 -- common/autotest_common.sh@643 -- # es=1 00:29:47.443 21:25:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:47.443 21:25:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:47.443 21:25:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:47.443 21:25:25 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:29:47.443 21:25:25 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:47.443 21:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.443 21:25:25 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:47.443 21:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:47.443 21:25:25 -- host/discovery.sh@67 -- # sort 00:29:47.443 21:25:25 -- host/discovery.sh@67 -- # xargs 00:29:47.443 21:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.443 21:25:25 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:29:47.443 21:25:25 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:29:47.443 21:25:25 -- host/discovery.sh@162 -- # kill 2551497 00:29:47.443 21:25:25 -- host/discovery.sh@163 -- # nvmftestfini 00:29:47.443 21:25:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:47.443 21:25:25 -- nvmf/common.sh@116 -- # sync 00:29:47.443 21:25:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:47.443 21:25:25 -- nvmf/common.sh@119 -- # set +e 00:29:47.443 21:25:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:47.443 21:25:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:47.443 rmmod nvme_tcp 00:29:47.443 rmmod nvme_fabrics 00:29:47.443 rmmod nvme_keyring 00:29:47.443 21:25:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:47.443 21:25:25 -- nvmf/common.sh@123 -- # set -e 00:29:47.443 21:25:25 -- nvmf/common.sh@124 -- # return 0 00:29:47.443 21:25:25 -- nvmf/common.sh@477 -- # '[' -n 2551296 ']' 00:29:47.443 21:25:25 -- nvmf/common.sh@478 -- # killprocess 2551296 00:29:47.443 21:25:25 -- common/autotest_common.sh@926 -- # '[' -z 2551296 ']' 00:29:47.443 21:25:25 -- common/autotest_common.sh@930 -- # kill -0 2551296 00:29:47.443 21:25:25 -- common/autotest_common.sh@931 -- # uname 00:29:47.443 21:25:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:47.443 21:25:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2551296 
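The last case points nvme_second at port 8010, where nothing is listening, with an attach timeout of 3000 ms (the -T flag, carried in the request as attach_timeout_ms). After repeated connect() failures (errno 111) the poller gives up at the 3000 ms mark ("timed out while attaching discovery ctrlr") and the RPC returns -110, "Connection timed out"; the test then confirms that only the original discovery service is still registered before tearing the host and target down. A rough equivalent, under the same /tmp/host.sock assumption as above:

  # Discovery against a closed port should fail with -110 after the 3000 ms
  # attach timeout rather than hang, and "nvme" should remain the only
  # registered discovery service.
  if ./spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
         -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
         -q nqn.2021-12.io.spdk:test -T 3000; then
      echo "expected JSON-RPC -110 (Connection timed out)" >&2
      exit 1
  fi
  ./spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'   # expect: nvme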
00:29:47.443 21:25:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:47.443 21:25:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:47.443 21:25:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2551296' 00:29:47.443 killing process with pid 2551296 00:29:47.443 21:25:25 -- common/autotest_common.sh@945 -- # kill 2551296 00:29:47.443 21:25:25 -- common/autotest_common.sh@950 -- # wait 2551296 00:29:47.706 21:25:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:47.706 21:25:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:47.706 21:25:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:47.706 21:25:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.706 21:25:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:47.706 21:25:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.706 21:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.706 21:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.620 21:25:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:49.620 00:29:49.620 real 0m22.601s 00:29:49.620 user 0m28.865s 00:29:49.620 sys 0m6.696s 00:29:49.620 21:25:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.620 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:49.620 ************************************ 00:29:49.620 END TEST nvmf_discovery 00:29:49.620 ************************************ 00:29:49.620 21:25:27 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:49.620 21:25:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:49.620 21:25:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:49.620 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:49.620 ************************************ 00:29:49.620 START TEST nvmf_discovery_remove_ifc 00:29:49.620 ************************************ 00:29:49.620 21:25:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:49.881 * Looking for test storage... 
00:29:49.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.881 21:25:27 -- nvmf/common.sh@7 -- # uname -s 00:29:49.881 21:25:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.881 21:25:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.881 21:25:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.881 21:25:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.881 21:25:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.881 21:25:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.881 21:25:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.881 21:25:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.881 21:25:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.881 21:25:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.881 21:25:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.881 21:25:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.881 21:25:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.881 21:25:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.881 21:25:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.881 21:25:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.881 21:25:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.881 21:25:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.881 21:25:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.881 21:25:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.881 21:25:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.881 21:25:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.881 21:25:27 -- paths/export.sh@5 -- # export PATH 00:29:49.881 21:25:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.881 21:25:27 -- nvmf/common.sh@46 -- # : 0 00:29:49.881 21:25:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:49.881 21:25:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:49.881 21:25:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:49.881 21:25:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.881 21:25:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.881 21:25:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:49.881 21:25:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:49.881 21:25:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:49.881 21:25:27 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:49.881 21:25:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:49.881 21:25:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.881 21:25:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:49.881 21:25:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:49.881 21:25:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:49.881 21:25:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.881 21:25:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.881 21:25:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.881 21:25:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:49.881 21:25:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:49.881 21:25:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:49.881 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:29:56.466 21:25:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:56.466 21:25:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:56.466 21:25:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:56.466 21:25:34 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:56.466 21:25:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:56.466 21:25:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:56.466 21:25:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:56.466 21:25:34 -- nvmf/common.sh@294 -- # net_devs=() 00:29:56.466 21:25:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:56.466 21:25:34 -- nvmf/common.sh@295 -- # e810=() 00:29:56.466 21:25:34 -- nvmf/common.sh@295 -- # local -ga e810 00:29:56.466 21:25:34 -- nvmf/common.sh@296 -- # x722=() 00:29:56.466 21:25:34 -- nvmf/common.sh@296 -- # local -ga x722 00:29:56.466 21:25:34 -- nvmf/common.sh@297 -- # mlx=() 00:29:56.466 21:25:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:56.466 21:25:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.466 21:25:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:56.466 21:25:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:56.466 21:25:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:56.466 21:25:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:56.466 21:25:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:56.466 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:56.466 21:25:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:56.466 21:25:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:56.466 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:56.466 21:25:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:56.466 21:25:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:56.466 21:25:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:56.466 21:25:34 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:56.466 21:25:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.466 21:25:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:56.466 21:25:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.466 21:25:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:56.466 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:56.466 21:25:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.466 21:25:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:56.466 21:25:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.466 21:25:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:56.466 21:25:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.466 21:25:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:56.466 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:56.466 21:25:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.466 21:25:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:56.467 21:25:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:56.467 21:25:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:56.467 21:25:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:56.467 21:25:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:56.467 21:25:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.467 21:25:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.467 21:25:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.467 21:25:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:56.467 21:25:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.467 21:25:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.467 21:25:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:56.467 21:25:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.467 21:25:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.467 21:25:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:56.467 21:25:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:56.467 21:25:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.467 21:25:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.726 21:25:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.727 21:25:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.727 21:25:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:56.727 21:25:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.727 21:25:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.727 21:25:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.727 21:25:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:56.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:56.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:29:56.727 00:29:56.727 --- 10.0.0.2 ping statistics --- 00:29:56.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.727 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:29:56.727 21:25:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:29:56.727 00:29:56.727 --- 10.0.0.1 ping statistics --- 00:29:56.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.727 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:29:56.727 21:25:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.727 21:25:34 -- nvmf/common.sh@410 -- # return 0 00:29:56.727 21:25:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:56.727 21:25:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.727 21:25:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:56.727 21:25:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:56.727 21:25:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.727 21:25:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:56.727 21:25:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:56.727 21:25:34 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:56.727 21:25:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:56.727 21:25:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:56.727 21:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.727 21:25:34 -- nvmf/common.sh@469 -- # nvmfpid=2558140 00:29:56.727 21:25:34 -- nvmf/common.sh@470 -- # waitforlisten 2558140 00:29:56.727 21:25:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:56.727 21:25:34 -- common/autotest_common.sh@819 -- # '[' -z 2558140 ']' 00:29:56.727 21:25:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.727 21:25:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:56.727 21:25:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.727 21:25:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:56.727 21:25:34 -- common/autotest_common.sh@10 -- # set +x 00:29:56.987 [2024-06-08 21:25:34.842727] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:56.987 [2024-06-08 21:25:34.842798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.987 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.987 [2024-06-08 21:25:34.938536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.987 [2024-06-08 21:25:35.028383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:56.987 [2024-06-08 21:25:35.028555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:56.987 [2024-06-08 21:25:35.028566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.987 [2024-06-08 21:25:35.028573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.987 [2024-06-08 21:25:35.028600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.558 21:25:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:57.558 21:25:35 -- common/autotest_common.sh@852 -- # return 0 00:29:57.558 21:25:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:57.558 21:25:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:57.558 21:25:35 -- common/autotest_common.sh@10 -- # set +x 00:29:57.819 21:25:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.819 21:25:35 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:57.819 21:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.819 21:25:35 -- common/autotest_common.sh@10 -- # set +x 00:29:57.819 [2024-06-08 21:25:35.684469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.819 [2024-06-08 21:25:35.692663] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:57.819 null0 00:29:57.819 [2024-06-08 21:25:35.724644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.819 21:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.819 21:25:35 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2558223 00:29:57.819 21:25:35 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2558223 /tmp/host.sock 00:29:57.819 21:25:35 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:57.819 21:25:35 -- common/autotest_common.sh@819 -- # '[' -z 2558223 ']' 00:29:57.819 21:25:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:57.819 21:25:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:57.819 21:25:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:57.819 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:57.819 21:25:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:57.819 21:25:35 -- common/autotest_common.sh@10 -- # set +x 00:29:57.819 [2024-06-08 21:25:35.796090] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:57.819 [2024-06-08 21:25:35.796154] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558223 ] 00:29:57.819 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.819 [2024-06-08 21:25:35.859485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.079 [2024-06-08 21:25:35.932108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:58.079 [2024-06-08 21:25:35.932239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.649 21:25:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:58.649 21:25:36 -- common/autotest_common.sh@852 -- # return 0 00:29:58.649 21:25:36 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:58.649 21:25:36 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:58.649 21:25:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.649 21:25:36 -- common/autotest_common.sh@10 -- # set +x 00:29:58.649 21:25:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.649 21:25:36 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:58.649 21:25:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.649 21:25:36 -- common/autotest_common.sh@10 -- # set +x 00:29:58.649 21:25:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.649 21:25:36 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:58.649 21:25:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.649 21:25:36 -- common/autotest_common.sh@10 -- # set +x 00:30:00.031 [2024-06-08 21:25:37.693729] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:00.031 [2024-06-08 21:25:37.693751] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:00.031 [2024-06-08 21:25:37.693765] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:00.031 [2024-06-08 21:25:37.824185] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:00.031 [2024-06-08 21:25:38.049276] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:00.031 [2024-06-08 21:25:38.049318] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:00.031 [2024-06-08 21:25:38.049341] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:00.031 [2024-06-08 21:25:38.049355] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:00.031 [2024-06-08 21:25:38.049374] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:00.031 21:25:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:00.031 [2024-06-08 21:25:38.053221] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19b04d0 was disconnected and freed. delete nvme_qpair. 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:00.031 21:25:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:00.031 21:25:38 -- common/autotest_common.sh@10 -- # set +x 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:00.031 21:25:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:00.031 21:25:38 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:00.292 21:25:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.292 21:25:38 -- common/autotest_common.sh@10 -- # set +x 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:00.292 21:25:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:00.292 21:25:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:01.232 21:25:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:01.232 21:25:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.232 21:25:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:01.232 21:25:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:01.232 21:25:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.232 21:25:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:01.232 21:25:39 -- common/autotest_common.sh@10 -- # set +x 00:30:01.232 21:25:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.492 21:25:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:01.492 21:25:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.432 21:25:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.432 21:25:40 -- common/autotest_common.sh@10 -- # set +x 00:30:02.432 21:25:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:02.432 21:25:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:03.372 21:25:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.372 21:25:41 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.372 21:25:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.372 21:25:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:03.372 21:25:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.372 21:25:41 -- common/autotest_common.sh@10 -- # set +x 00:30:03.372 21:25:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.372 21:25:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:03.372 21:25:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:03.372 21:25:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.754 21:25:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:04.754 21:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.754 21:25:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:04.754 21:25:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:05.695 [2024-06-08 21:25:43.489916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:05.695 [2024-06-08 21:25:43.489958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.695 [2024-06-08 21:25:43.489970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.695 [2024-06-08 21:25:43.489979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.695 [2024-06-08 21:25:43.489987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.695 [2024-06-08 21:25:43.489995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.695 [2024-06-08 21:25:43.490002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.695 [2024-06-08 21:25:43.490010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.695 [2024-06-08 21:25:43.490017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.695 [2024-06-08 21:25:43.490025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.695 [2024-06-08 21:25:43.490032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.695 [2024-06-08 21:25:43.490039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1976b40 is same with the state(5) to be set 00:30:05.695 [2024-06-08 
21:25:43.499936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1976b40 (9): Bad file descriptor 00:30:05.695 [2024-06-08 21:25:43.509977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:05.695 21:25:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.695 21:25:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.695 21:25:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.695 21:25:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.695 21:25:43 -- common/autotest_common.sh@10 -- # set +x 00:30:05.695 21:25:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.695 21:25:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.636 [2024-06-08 21:25:44.562426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:07.576 [2024-06-08 21:25:45.586441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:07.576 [2024-06-08 21:25:45.586484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1976b40 with addr=10.0.0.2, port=4420 00:30:07.576 [2024-06-08 21:25:45.586498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1976b40 is same with the state(5) to be set 00:30:07.576 [2024-06-08 21:25:45.586842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1976b40 (9): Bad file descriptor 00:30:07.576 [2024-06-08 21:25:45.586865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:07.576 [2024-06-08 21:25:45.586885] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:07.576 [2024-06-08 21:25:45.586908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.576 [2024-06-08 21:25:45.586918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.576 [2024-06-08 21:25:45.586928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.576 [2024-06-08 21:25:45.586935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.576 [2024-06-08 21:25:45.586943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.576 [2024-06-08 21:25:45.586950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.576 [2024-06-08 21:25:45.586959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.576 [2024-06-08 21:25:45.586966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:07.576 [2024-06-08 21:25:45.586974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:07.576 [2024-06-08 21:25:45.586982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:07.576 [2024-06-08 21:25:45.586990] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:30:07.576 [2024-06-08 21:25:45.587526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1976f50 (9): Bad file descriptor 00:30:07.576 [2024-06-08 21:25:45.588537] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:07.576 [2024-06-08 21:25:45.588549] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:07.576 21:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.576 21:25:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:07.576 21:25:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:08.958 21:25:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:08.958 21:25:46 -- common/autotest_common.sh@10 -- # set +x 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:08.958 21:25:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:08.958 21:25:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.958 21:25:46 -- common/autotest_common.sh@10 -- # set +x 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:08.958 21:25:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:08.958 21:25:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:09.901 [2024-06-08 21:25:47.650581] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:09.901 [2024-06-08 21:25:47.650602] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:09.901 [2024-06-08 21:25:47.650616] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:09.901 [2024-06-08 21:25:47.779029] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:09.901 21:25:47 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:09.901 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:09.901 [2024-06-08 21:25:47.837828] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:09.901 [2024-06-08 21:25:47.837863] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:09.901 [2024-06-08 21:25:47.837883] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:09.901 [2024-06-08 21:25:47.837896] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:09.901 [2024-06-08 21:25:47.837904] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:09.901 21:25:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.901 [2024-06-08 21:25:47.847089] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1984780 was disconnected and freed. delete nvme_qpair. 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:09.901 21:25:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:10.844 21:25:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:10.844 21:25:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.844 21:25:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:10.844 21:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:10.844 21:25:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:10.844 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:30:10.844 21:25:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:10.844 21:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.137 21:25:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:11.137 21:25:48 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:11.137 21:25:48 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2558223 00:30:11.137 21:25:48 -- common/autotest_common.sh@926 -- # '[' -z 2558223 ']' 00:30:11.137 21:25:48 -- common/autotest_common.sh@930 -- # kill -0 2558223 00:30:11.137 21:25:48 -- common/autotest_common.sh@931 -- # uname 00:30:11.137 21:25:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:11.137 21:25:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2558223 00:30:11.137 21:25:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:11.137 21:25:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:11.137 21:25:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2558223' 00:30:11.137 killing process with pid 2558223 00:30:11.137 21:25:48 -- common/autotest_common.sh@945 -- # kill 2558223 00:30:11.137 21:25:48 -- common/autotest_common.sh@950 -- # wait 2558223 00:30:11.137 21:25:49 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:11.137 21:25:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:11.137 21:25:49 -- nvmf/common.sh@116 -- # sync 00:30:11.137 21:25:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:11.137 21:25:49 -- nvmf/common.sh@119 -- # set +e 00:30:11.137 21:25:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:11.137 21:25:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:11.137 rmmod nvme_tcp 00:30:11.137 rmmod nvme_fabrics 
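For context on the polling visible above: host/discovery_remove_ifc.sh first waits for the bdev list reported over /tmp/host.sock to drain after the target-side interface is removed, then re-adds the 10.0.0.2 address and waits for the rediscovered namespace (nvme1n1) to appear. Reconstructed from the xtrace output, the two helpers doing that work look roughly like this (a sketch only; the exact bodies are not printed in the log, and the real helper may compare the whole list rather than a substring):

  # List current bdev names via the host-side RPC socket (rpc_cmd wraps scripts/rpc.py).
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the named bdev (e.g. nvme1n1) shows up in that list.
  wait_for_bdev() {
      local bdev=$1
      while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
          sleep 1
      done
  }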
00:30:11.137 rmmod nvme_keyring 00:30:11.137 21:25:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:11.137 21:25:49 -- nvmf/common.sh@123 -- # set -e 00:30:11.137 21:25:49 -- nvmf/common.sh@124 -- # return 0 00:30:11.137 21:25:49 -- nvmf/common.sh@477 -- # '[' -n 2558140 ']' 00:30:11.137 21:25:49 -- nvmf/common.sh@478 -- # killprocess 2558140 00:30:11.137 21:25:49 -- common/autotest_common.sh@926 -- # '[' -z 2558140 ']' 00:30:11.137 21:25:49 -- common/autotest_common.sh@930 -- # kill -0 2558140 00:30:11.137 21:25:49 -- common/autotest_common.sh@931 -- # uname 00:30:11.137 21:25:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:11.137 21:25:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2558140 00:30:11.398 21:25:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:11.398 21:25:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:11.398 21:25:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2558140' 00:30:11.398 killing process with pid 2558140 00:30:11.398 21:25:49 -- common/autotest_common.sh@945 -- # kill 2558140 00:30:11.399 21:25:49 -- common/autotest_common.sh@950 -- # wait 2558140 00:30:11.399 21:25:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:11.399 21:25:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:11.399 21:25:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:11.399 21:25:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.399 21:25:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:11.399 21:25:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.399 21:25:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.399 21:25:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.947 21:25:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:13.947 00:30:13.947 real 0m23.787s 00:30:13.947 user 0m28.300s 00:30:13.947 sys 0m6.441s 00:30:13.947 21:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.947 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:30:13.947 ************************************ 00:30:13.947 END TEST nvmf_discovery_remove_ifc 00:30:13.947 ************************************ 00:30:13.947 21:25:51 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:30:13.947 21:25:51 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:13.947 21:25:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:13.947 21:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.947 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:30:13.947 ************************************ 00:30:13.947 START TEST nvmf_digest 00:30:13.947 ************************************ 00:30:13.947 21:25:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:13.947 * Looking for test storage... 
00:30:13.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.947 21:25:51 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.947 21:25:51 -- nvmf/common.sh@7 -- # uname -s 00:30:13.947 21:25:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.947 21:25:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.947 21:25:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.947 21:25:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.947 21:25:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.947 21:25:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.947 21:25:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.947 21:25:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.948 21:25:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.948 21:25:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.948 21:25:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:13.948 21:25:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:13.948 21:25:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.948 21:25:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.948 21:25:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.948 21:25:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.948 21:25:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.948 21:25:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.948 21:25:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.948 21:25:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.948 21:25:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.948 21:25:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.948 21:25:51 -- paths/export.sh@5 -- # export PATH 00:30:13.948 21:25:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.948 21:25:51 -- nvmf/common.sh@46 -- # : 0 00:30:13.948 21:25:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:13.948 21:25:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:13.948 21:25:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:13.948 21:25:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.948 21:25:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.948 21:25:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:13.948 21:25:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:13.948 21:25:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:13.948 21:25:51 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:13.948 21:25:51 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:13.948 21:25:51 -- host/digest.sh@16 -- # runtime=2 00:30:13.948 21:25:51 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:13.948 21:25:51 -- host/digest.sh@132 -- # nvmftestinit 00:30:13.948 21:25:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:13.948 21:25:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.948 21:25:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:13.948 21:25:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:13.948 21:25:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:13.948 21:25:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.948 21:25:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.948 21:25:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.948 21:25:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:13.948 21:25:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:13.948 21:25:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:13.948 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:30:20.538 21:25:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:20.538 21:25:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:20.538 21:25:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:20.538 21:25:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:20.538 21:25:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:20.538 21:25:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:20.538 21:25:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:20.538 21:25:58 -- 
nvmf/common.sh@294 -- # net_devs=() 00:30:20.538 21:25:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:20.538 21:25:58 -- nvmf/common.sh@295 -- # e810=() 00:30:20.538 21:25:58 -- nvmf/common.sh@295 -- # local -ga e810 00:30:20.538 21:25:58 -- nvmf/common.sh@296 -- # x722=() 00:30:20.538 21:25:58 -- nvmf/common.sh@296 -- # local -ga x722 00:30:20.538 21:25:58 -- nvmf/common.sh@297 -- # mlx=() 00:30:20.538 21:25:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:20.538 21:25:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.538 21:25:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:20.538 21:25:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:20.538 21:25:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:20.538 21:25:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:20.538 21:25:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:20.538 21:25:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:20.538 21:25:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:20.539 21:25:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:20.539 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:20.539 21:25:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:20.539 21:25:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:20.539 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:20.539 21:25:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:20.539 21:25:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:20.539 21:25:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.539 21:25:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:20.539 21:25:58 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.539 21:25:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:20.539 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:20.539 21:25:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.539 21:25:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:20.539 21:25:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.539 21:25:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:20.539 21:25:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.539 21:25:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:20.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:20.539 21:25:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.539 21:25:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:20.539 21:25:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:20.539 21:25:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:20.539 21:25:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.539 21:25:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.539 21:25:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.539 21:25:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:20.539 21:25:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.539 21:25:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.539 21:25:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:20.539 21:25:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.539 21:25:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.539 21:25:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:20.539 21:25:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:20.539 21:25:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.539 21:25:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.539 21:25:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.539 21:25:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.539 21:25:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:20.539 21:25:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.539 21:25:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.539 21:25:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.539 21:25:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:20.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:30:20.539 00:30:20.539 --- 10.0.0.2 ping statistics --- 00:30:20.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.539 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:30:20.539 21:25:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
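Stripped of the xtrace prefixes, the nvmf_tcp_init sequence leading up to these ping checks is the following; cvl_0_0/cvl_0_1 are the two e810 ports found above, with the target port moved into its own network namespace and the initiator port left in the root namespace:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator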
00:30:20.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:30:20.539 00:30:20.539 --- 10.0.0.1 ping statistics --- 00:30:20.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.539 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:30:20.539 21:25:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.539 21:25:58 -- nvmf/common.sh@410 -- # return 0 00:30:20.539 21:25:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:20.539 21:25:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.539 21:25:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:20.539 21:25:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.539 21:25:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:20.539 21:25:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:20.801 21:25:58 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:20.801 21:25:58 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:20.801 21:25:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:20.801 21:25:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:20.801 21:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:20.801 ************************************ 00:30:20.801 START TEST nvmf_digest_clean 00:30:20.801 ************************************ 00:30:20.801 21:25:58 -- common/autotest_common.sh@1104 -- # run_digest 00:30:20.801 21:25:58 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:20.801 21:25:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:20.801 21:25:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:20.801 21:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:20.801 21:25:58 -- nvmf/common.sh@469 -- # nvmfpid=2565012 00:30:20.801 21:25:58 -- nvmf/common.sh@470 -- # waitforlisten 2565012 00:30:20.801 21:25:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:20.801 21:25:58 -- common/autotest_common.sh@819 -- # '[' -z 2565012 ']' 00:30:20.801 21:25:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.801 21:25:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:20.801 21:25:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.801 21:25:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:20.801 21:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:20.801 [2024-06-08 21:25:58.734080] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:20.801 [2024-06-08 21:25:58.734165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.801 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.801 [2024-06-08 21:25:58.804971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.801 [2024-06-08 21:25:58.877365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:20.801 [2024-06-08 21:25:58.877492] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.801 [2024-06-08 21:25:58.877500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.801 [2024-06-08 21:25:58.877508] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.801 [2024-06-08 21:25:58.877531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.743 21:25:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:21.743 21:25:59 -- common/autotest_common.sh@852 -- # return 0 00:30:21.743 21:25:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:21.743 21:25:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:21.743 21:25:59 -- common/autotest_common.sh@10 -- # set +x 00:30:21.743 21:25:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.743 21:25:59 -- host/digest.sh@120 -- # common_target_config 00:30:21.743 21:25:59 -- host/digest.sh@43 -- # rpc_cmd 00:30:21.743 21:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.743 21:25:59 -- common/autotest_common.sh@10 -- # set +x 00:30:21.743 null0 00:30:21.743 [2024-06-08 21:25:59.600104] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.743 [2024-06-08 21:25:59.624300] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.743 21:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.743 21:25:59 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:21.743 21:25:59 -- host/digest.sh@77 -- # local rw bs qd 00:30:21.743 21:25:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:21.743 21:25:59 -- host/digest.sh@80 -- # rw=randread 00:30:21.743 21:25:59 -- host/digest.sh@80 -- # bs=4096 00:30:21.743 21:25:59 -- host/digest.sh@80 -- # qd=128 00:30:21.743 21:25:59 -- host/digest.sh@82 -- # bperfpid=2565293 00:30:21.743 21:25:59 -- host/digest.sh@83 -- # waitforlisten 2565293 /var/tmp/bperf.sock 00:30:21.743 21:25:59 -- common/autotest_common.sh@819 -- # '[' -z 2565293 ']' 00:30:21.743 21:25:59 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:21.743 21:25:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.743 21:25:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:21.743 21:25:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
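The 'null0' bdev, the TCP transport and the 10.0.0.2:4420 listener reported just above come from digest.sh's common_target_config, which feeds a batch of RPCs to the target through rpc_cmd. The individual calls are not echoed in the trace; the visible effects correspond to a configuration along these lines (the method names are standard SPDK RPCs, but the exact arguments, sizes and subsystem options are assumptions):

  # Hypothetical reconstruction of common_target_config (rpc_cmd goes to the
  # nvmf target's default RPC socket).
  rpc_cmd bdev_null_create null0 100 512                # size/block size are illustrative
  rpc_cmd nvmf_create_transport $NVMF_TRANSPORT_OPTS    # expands to '-t tcp -o' per the trace
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s "$NVMF_SERIAL"
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420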
00:30:21.743 21:25:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:21.743 21:25:59 -- common/autotest_common.sh@10 -- # set +x 00:30:21.743 [2024-06-08 21:25:59.678944] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:21.743 [2024-06-08 21:25:59.679034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565293 ] 00:30:21.743 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.743 [2024-06-08 21:25:59.762672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.743 [2024-06-08 21:25:59.824920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.686 21:26:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:22.686 21:26:00 -- common/autotest_common.sh@852 -- # return 0 00:30:22.686 21:26:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:22.686 21:26:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:22.686 21:26:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:22.686 21:26:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.686 21:26:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.947 nvme0n1 00:30:22.947 21:26:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:22.947 21:26:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:22.947 Running I/O for 2 seconds... 
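Condensed from the commands in the trace above (full Jenkins workspace paths shortened), each run_bperf pass drives bdevperf entirely over its own RPC socket: start it suspended, wait for the socket, finish framework init, attach the target with data digest enabled, then kick the workload from bdevperf.py:

  BPERF_SOCK=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  waitforlisten "$bperfpid" "$BPERF_SOCK"          # autotest_common.sh helper: wait for the socket
  scripts/rpc.py -s "$BPERF_SOCK" framework_start_init
  scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests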
00:30:25.490 00:30:25.490 Latency(us) 00:30:25.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.490 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:25.490 nvme0n1 : 2.00 22667.08 88.54 0.00 0.00 5639.44 3017.39 16602.45 00:30:25.490 =================================================================================================================== 00:30:25.490 Total : 22667.08 88.54 0.00 0.00 5639.44 3017.39 16602.45 00:30:25.491 0 00:30:25.491 21:26:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:25.491 21:26:02 -- host/digest.sh@92 -- # get_accel_stats 00:30:25.491 21:26:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:25.491 21:26:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:25.491 | select(.opcode=="crc32c") 00:30:25.491 | "\(.module_name) \(.executed)"' 00:30:25.491 21:26:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:25.491 21:26:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:25.491 21:26:03 -- host/digest.sh@93 -- # exp_module=software 00:30:25.491 21:26:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:25.491 21:26:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:25.491 21:26:03 -- host/digest.sh@97 -- # killprocess 2565293 00:30:25.491 21:26:03 -- common/autotest_common.sh@926 -- # '[' -z 2565293 ']' 00:30:25.491 21:26:03 -- common/autotest_common.sh@930 -- # kill -0 2565293 00:30:25.491 21:26:03 -- common/autotest_common.sh@931 -- # uname 00:30:25.491 21:26:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:25.491 21:26:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2565293 00:30:25.491 21:26:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:25.491 21:26:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:25.491 21:26:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2565293' 00:30:25.491 killing process with pid 2565293 00:30:25.491 21:26:03 -- common/autotest_common.sh@945 -- # kill 2565293 00:30:25.491 Received shutdown signal, test time was about 2.000000 seconds 00:30:25.491 00:30:25.491 Latency(us) 00:30:25.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.491 =================================================================================================================== 00:30:25.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.491 21:26:03 -- common/autotest_common.sh@950 -- # wait 2565293 00:30:25.491 21:26:03 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:30:25.491 21:26:03 -- host/digest.sh@77 -- # local rw bs qd 00:30:25.491 21:26:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:25.491 21:26:03 -- host/digest.sh@80 -- # rw=randread 00:30:25.491 21:26:03 -- host/digest.sh@80 -- # bs=131072 00:30:25.491 21:26:03 -- host/digest.sh@80 -- # qd=16 00:30:25.491 21:26:03 -- host/digest.sh@82 -- # bperfpid=2566053 00:30:25.491 21:26:03 -- host/digest.sh@83 -- # waitforlisten 2566053 /var/tmp/bperf.sock 00:30:25.491 21:26:03 -- common/autotest_common.sh@819 -- # '[' -z 2566053 ']' 00:30:25.491 21:26:03 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:25.491 21:26:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:30:25.491 21:26:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:25.491 21:26:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:25.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:25.491 21:26:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:25.491 21:26:03 -- common/autotest_common.sh@10 -- # set +x 00:30:25.491 [2024-06-08 21:26:03.385357] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:25.491 [2024-06-08 21:26:03.385434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566053 ] 00:30:25.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:25.491 Zero copy mechanism will not be used. 00:30:25.491 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.491 [2024-06-08 21:26:03.463451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.491 [2024-06-08 21:26:03.524208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.061 21:26:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:26.061 21:26:04 -- common/autotest_common.sh@852 -- # return 0 00:30:26.061 21:26:04 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:26.061 21:26:04 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:26.061 21:26:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:26.322 21:26:04 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:26.322 21:26:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:26.583 nvme0n1 00:30:26.583 21:26:04 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:26.583 21:26:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:26.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:26.583 Zero copy mechanism will not be used. 00:30:26.583 Running I/O for 2 seconds... 
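The pass/fail decision after each of these runs is taken from the accelerator statistics rather than from bdevperf itself: digest.sh pulls the crc32c counters over the bperf socket and checks that the work was actually executed by the expected module (software in this configuration). In sketch form, following the commands visible in the trace:

  # Emits "<module_name> <executed>" for the crc32c opcode, using the same jq filter as above.
  get_accel_stats() {
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  }

  read -r acc_module acc_executed < <(get_accel_stats)
  exp_module=software
  (( acc_executed > 0 ))                   # some crc32c operations must have been executed
  [[ $acc_module == "$exp_module" ]]       # ...and by the module the test expects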
00:30:29.124 00:30:29.124 Latency(us) 00:30:29.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.124 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:29.124 nvme0n1 : 2.01 2022.93 252.87 0.00 0.00 7905.63 5270.19 14745.60 00:30:29.124 =================================================================================================================== 00:30:29.124 Total : 2022.93 252.87 0.00 0.00 7905.63 5270.19 14745.60 00:30:29.124 0 00:30:29.124 21:26:06 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:29.124 21:26:06 -- host/digest.sh@92 -- # get_accel_stats 00:30:29.124 21:26:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:29.124 21:26:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:29.124 | select(.opcode=="crc32c") 00:30:29.124 | "\(.module_name) \(.executed)"' 00:30:29.124 21:26:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:29.124 21:26:06 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:29.124 21:26:06 -- host/digest.sh@93 -- # exp_module=software 00:30:29.124 21:26:06 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:29.124 21:26:06 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:29.124 21:26:06 -- host/digest.sh@97 -- # killprocess 2566053 00:30:29.124 21:26:06 -- common/autotest_common.sh@926 -- # '[' -z 2566053 ']' 00:30:29.124 21:26:06 -- common/autotest_common.sh@930 -- # kill -0 2566053 00:30:29.124 21:26:06 -- common/autotest_common.sh@931 -- # uname 00:30:29.124 21:26:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:29.124 21:26:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2566053 00:30:29.124 21:26:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:29.124 21:26:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:29.124 21:26:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2566053' 00:30:29.124 killing process with pid 2566053 00:30:29.124 21:26:06 -- common/autotest_common.sh@945 -- # kill 2566053 00:30:29.124 Received shutdown signal, test time was about 2.000000 seconds 00:30:29.124 00:30:29.124 Latency(us) 00:30:29.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.124 =================================================================================================================== 00:30:29.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.124 21:26:06 -- common/autotest_common.sh@950 -- # wait 2566053 00:30:29.124 21:26:06 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:30:29.124 21:26:06 -- host/digest.sh@77 -- # local rw bs qd 00:30:29.124 21:26:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:29.124 21:26:06 -- host/digest.sh@80 -- # rw=randwrite 00:30:29.124 21:26:06 -- host/digest.sh@80 -- # bs=4096 00:30:29.124 21:26:06 -- host/digest.sh@80 -- # qd=128 00:30:29.124 21:26:06 -- host/digest.sh@82 -- # bperfpid=2566747 00:30:29.124 21:26:06 -- host/digest.sh@83 -- # waitforlisten 2566747 /var/tmp/bperf.sock 00:30:29.124 21:26:06 -- common/autotest_common.sh@819 -- # '[' -z 2566747 ']' 00:30:29.124 21:26:06 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:29.124 21:26:06 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:29.124 21:26:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:29.124 21:26:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:29.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:29.124 21:26:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:29.124 21:26:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.124 [2024-06-08 21:26:07.048292] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:29.124 [2024-06-08 21:26:07.048359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566747 ] 00:30:29.124 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.124 [2024-06-08 21:26:07.124082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.124 [2024-06-08 21:26:07.175700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.077 21:26:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:30.077 21:26:07 -- common/autotest_common.sh@852 -- # return 0 00:30:30.077 21:26:07 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:30.077 21:26:07 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:30.077 21:26:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:30.077 21:26:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:30.077 21:26:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:30.339 nvme0n1 00:30:30.339 21:26:08 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:30.339 21:26:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:30.339 Running I/O for 2 seconds... 
00:30:32.254 00:30:32.254 Latency(us) 00:30:32.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.254 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.254 nvme0n1 : 2.01 22008.71 85.97 0.00 0.00 5805.49 3495.25 16711.68 00:30:32.254 =================================================================================================================== 00:30:32.254 Total : 22008.71 85.97 0.00 0.00 5805.49 3495.25 16711.68 00:30:32.254 0 00:30:32.516 21:26:10 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:32.516 21:26:10 -- host/digest.sh@92 -- # get_accel_stats 00:30:32.516 21:26:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:32.516 21:26:10 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:32.516 | select(.opcode=="crc32c") 00:30:32.516 | "\(.module_name) \(.executed)"' 00:30:32.516 21:26:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:32.516 21:26:10 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:32.516 21:26:10 -- host/digest.sh@93 -- # exp_module=software 00:30:32.516 21:26:10 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:32.516 21:26:10 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:32.516 21:26:10 -- host/digest.sh@97 -- # killprocess 2566747 00:30:32.516 21:26:10 -- common/autotest_common.sh@926 -- # '[' -z 2566747 ']' 00:30:32.516 21:26:10 -- common/autotest_common.sh@930 -- # kill -0 2566747 00:30:32.516 21:26:10 -- common/autotest_common.sh@931 -- # uname 00:30:32.516 21:26:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:32.516 21:26:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2566747 00:30:32.516 21:26:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:32.516 21:26:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:32.516 21:26:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2566747' 00:30:32.516 killing process with pid 2566747 00:30:32.516 21:26:10 -- common/autotest_common.sh@945 -- # kill 2566747 00:30:32.516 Received shutdown signal, test time was about 2.000000 seconds 00:30:32.516 00:30:32.516 Latency(us) 00:30:32.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.516 =================================================================================================================== 00:30:32.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.516 21:26:10 -- common/autotest_common.sh@950 -- # wait 2566747 00:30:32.777 21:26:10 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:30:32.777 21:26:10 -- host/digest.sh@77 -- # local rw bs qd 00:30:32.777 21:26:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:32.777 21:26:10 -- host/digest.sh@80 -- # rw=randwrite 00:30:32.777 21:26:10 -- host/digest.sh@80 -- # bs=131072 00:30:32.777 21:26:10 -- host/digest.sh@80 -- # qd=16 00:30:32.777 21:26:10 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:32.777 21:26:10 -- host/digest.sh@82 -- # bperfpid=2567435 00:30:32.777 21:26:10 -- host/digest.sh@83 -- # waitforlisten 2567435 /var/tmp/bperf.sock 00:30:32.777 21:26:10 -- common/autotest_common.sh@819 -- # '[' -z 2567435 ']' 00:30:32.777 21:26:10 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:32.777 21:26:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:32.777 21:26:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:32.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:32.777 21:26:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:32.777 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:30:32.777 [2024-06-08 21:26:10.721927] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:32.777 [2024-06-08 21:26:10.721982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567435 ] 00:30:32.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:32.777 Zero copy mechanism will not be used. 00:30:32.777 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.777 [2024-06-08 21:26:10.796757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.777 [2024-06-08 21:26:10.846836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.721 21:26:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:33.721 21:26:11 -- common/autotest_common.sh@852 -- # return 0 00:30:33.721 21:26:11 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:33.721 21:26:11 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:33.721 21:26:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:33.721 21:26:11 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:33.721 21:26:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:33.981 nvme0n1 00:30:33.981 21:26:11 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:33.981 21:26:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:33.981 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:33.981 Zero copy mechanism will not be used. 00:30:33.981 Running I/O for 2 seconds... 
00:30:36.526 00:30:36.526 Latency(us) 00:30:36.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:36.526 nvme0n1 : 2.01 2768.60 346.08 0.00 0.00 5768.92 4068.69 22500.69 00:30:36.526 =================================================================================================================== 00:30:36.526 Total : 2768.60 346.08 0.00 0.00 5768.92 4068.69 22500.69 00:30:36.526 0 00:30:36.526 21:26:14 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:36.526 21:26:14 -- host/digest.sh@92 -- # get_accel_stats 00:30:36.526 21:26:14 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:36.526 21:26:14 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:36.526 | select(.opcode=="crc32c") 00:30:36.526 | "\(.module_name) \(.executed)"' 00:30:36.526 21:26:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:36.526 21:26:14 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:36.526 21:26:14 -- host/digest.sh@93 -- # exp_module=software 00:30:36.526 21:26:14 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:36.526 21:26:14 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:36.526 21:26:14 -- host/digest.sh@97 -- # killprocess 2567435 00:30:36.526 21:26:14 -- common/autotest_common.sh@926 -- # '[' -z 2567435 ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@930 -- # kill -0 2567435 00:30:36.526 21:26:14 -- common/autotest_common.sh@931 -- # uname 00:30:36.526 21:26:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2567435 00:30:36.526 21:26:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:36.526 21:26:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2567435' 00:30:36.526 killing process with pid 2567435 00:30:36.526 21:26:14 -- common/autotest_common.sh@945 -- # kill 2567435 00:30:36.526 Received shutdown signal, test time was about 2.000000 seconds 00:30:36.526 00:30:36.526 Latency(us) 00:30:36.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.526 =================================================================================================================== 00:30:36.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.526 21:26:14 -- common/autotest_common.sh@950 -- # wait 2567435 00:30:36.526 21:26:14 -- host/digest.sh@126 -- # killprocess 2565012 00:30:36.526 21:26:14 -- common/autotest_common.sh@926 -- # '[' -z 2565012 ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@930 -- # kill -0 2565012 00:30:36.526 21:26:14 -- common/autotest_common.sh@931 -- # uname 00:30:36.526 21:26:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2565012 00:30:36.526 21:26:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:36.526 21:26:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2565012' 00:30:36.526 killing process with pid 2565012 00:30:36.526 21:26:14 -- common/autotest_common.sh@945 -- # kill 2565012 00:30:36.526 21:26:14 -- common/autotest_common.sh@950 -- # wait 2565012 
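killprocess, used above first for the bdevperf instances and then for the nvmf target (pid 2565012), is a guarded wrapper from autotest_common.sh. As exercised in this trace it reduces to roughly the following sketch; the branch taken when the process name turns out to be 'sudo' differs in the real helper and is not modelled here:

  killprocess() {
      local pid=$1
      [[ -z $pid ]] && return 1
      kill -0 "$pid" || return 1                          # is it still running?
      if [[ $(uname) == Linux ]]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 / reactor_1 in the trace
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }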
00:30:36.526 00:30:36.526 real 0m15.870s 00:30:36.526 user 0m31.105s 00:30:36.526 sys 0m3.220s 00:30:36.526 21:26:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.526 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:30:36.526 ************************************ 00:30:36.526 END TEST nvmf_digest_clean 00:30:36.526 ************************************ 00:30:36.526 21:26:14 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:30:36.526 21:26:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.526 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:30:36.526 ************************************ 00:30:36.526 START TEST nvmf_digest_error 00:30:36.526 ************************************ 00:30:36.526 21:26:14 -- common/autotest_common.sh@1104 -- # run_digest_error 00:30:36.526 21:26:14 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:30:36.526 21:26:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:36.526 21:26:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:36.526 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:30:36.526 21:26:14 -- nvmf/common.sh@469 -- # nvmfpid=2568155 00:30:36.526 21:26:14 -- nvmf/common.sh@470 -- # waitforlisten 2568155 00:30:36.526 21:26:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:36.526 21:26:14 -- common/autotest_common.sh@819 -- # '[' -z 2568155 ']' 00:30:36.526 21:26:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.526 21:26:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:36.526 21:26:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.526 21:26:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:36.526 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:30:36.790 [2024-06-08 21:26:14.648041] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:36.790 [2024-06-08 21:26:14.648139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.790 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.790 [2024-06-08 21:26:14.720587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.790 [2024-06-08 21:26:14.790053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:36.790 [2024-06-08 21:26:14.790173] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.790 [2024-06-08 21:26:14.790181] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.790 [2024-06-08 21:26:14.790188] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:36.790 [2024-06-08 21:26:14.790206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.360 21:26:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:37.360 21:26:15 -- common/autotest_common.sh@852 -- # return 0 00:30:37.360 21:26:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:37.360 21:26:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:37.360 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:37.621 21:26:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.621 21:26:15 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:37.621 21:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.621 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:37.621 [2024-06-08 21:26:15.464189] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:37.621 21:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.621 21:26:15 -- host/digest.sh@104 -- # common_target_config 00:30:37.621 21:26:15 -- host/digest.sh@43 -- # rpc_cmd 00:30:37.621 21:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.621 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:37.621 null0 00:30:37.621 [2024-06-08 21:26:15.545175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.621 [2024-06-08 21:26:15.569370] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.621 21:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.621 21:26:15 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:30:37.621 21:26:15 -- host/digest.sh@54 -- # local rw bs qd 00:30:37.621 21:26:15 -- host/digest.sh@56 -- # rw=randread 00:30:37.621 21:26:15 -- host/digest.sh@56 -- # bs=4096 00:30:37.621 21:26:15 -- host/digest.sh@56 -- # qd=128 00:30:37.621 21:26:15 -- host/digest.sh@58 -- # bperfpid=2568505 00:30:37.621 21:26:15 -- host/digest.sh@60 -- # waitforlisten 2568505 /var/tmp/bperf.sock 00:30:37.621 21:26:15 -- common/autotest_common.sh@819 -- # '[' -z 2568505 ']' 00:30:37.622 21:26:15 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:37.622 21:26:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:37.622 21:26:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:37.622 21:26:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:37.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:37.622 21:26:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:37.622 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:30:37.622 [2024-06-08 21:26:15.618822] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:37.622 [2024-06-08 21:26:15.618873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568505 ] 00:30:37.622 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.622 [2024-06-08 21:26:15.694095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.882 [2024-06-08 21:26:15.746185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.454 21:26:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:38.454 21:26:16 -- common/autotest_common.sh@852 -- # return 0 00:30:38.454 21:26:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:38.454 21:26:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:38.715 21:26:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:38.715 21:26:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.715 21:26:16 -- common/autotest_common.sh@10 -- # set +x 00:30:38.715 21:26:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.715 21:26:16 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:38.715 21:26:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:38.976 nvme0n1 00:30:38.976 21:26:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:38.976 21:26:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.976 21:26:16 -- common/autotest_common.sh@10 -- # set +x 00:30:38.976 21:26:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.976 21:26:16 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:38.976 21:26:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:38.976 Running I/O for 2 seconds... 
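bdevperf was started above with -z, so it sits idle on /var/tmp/bperf.sock until perform_tests is issued. The run_bperf_err sequence then disables any pending crc32c error injection on the target, attaches the remote controller with the data digest enabled (--ddgst), arms crc32c corruption on the target, and starts the 2-second randread run. With the target's crc32c results being corrupted, the host-side digest check in nvme_tcp.c fails and the reads are completed with the data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) messages that make up the stream below. A condensed sketch of those RPCs, mirroring the xtrace above (paths shortened; calls without -s go to the target's default socket, calls with -s /var/tmp/bperf.sock go to bdevperf):

  # bdevperf side: keep NVMe error statistics and set the bdev retry count to -1, as in the log
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side: make sure no crc32c error is being injected yet
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # attach the remote namespace with the data digest (DDGST) enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side: start corrupting crc32c results, then kick off the workload
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests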
00:30:38.976 [2024-06-08 21:26:17.030211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:38.976 [2024-06-08 21:26:17.030241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.976 [2024-06-08 21:26:17.030249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.976 [2024-06-08 21:26:17.043082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:38.976 [2024-06-08 21:26:17.043103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.976 [2024-06-08 21:26:17.043110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.976 [2024-06-08 21:26:17.056854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:38.976 [2024-06-08 21:26:17.056873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.976 [2024-06-08 21:26:17.056880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.977 [2024-06-08 21:26:17.067528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:38.977 [2024-06-08 21:26:17.067547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.977 [2024-06-08 21:26:17.067553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.238 [2024-06-08 21:26:17.079588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.238 [2024-06-08 21:26:17.079606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.238 [2024-06-08 21:26:17.079613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.238 [2024-06-08 21:26:17.090738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.238 [2024-06-08 21:26:17.090755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.238 [2024-06-08 21:26:17.090763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.238 [2024-06-08 21:26:17.101683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.238 [2024-06-08 21:26:17.101700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.238 [2024-06-08 21:26:17.101707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.238 [2024-06-08 21:26:17.113645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.113663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.113670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.124813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.124830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.124841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.135716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.135733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.135740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.147035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.147053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.147059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.158870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.158888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.158894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.169879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.169897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.169903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.181997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.182014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.182021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.192915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.192933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.192939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.203748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.203766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.203773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.215573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.215591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.215597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.226690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.226714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.226720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.237577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.237595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.237602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.249661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.249679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.249685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.260692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.260710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 
21:26:17.260716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.272488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.272506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.272512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.283578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.283596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.283603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.294567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.294585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.294592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.305723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.305741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.305747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.317613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.317631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.317637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.239 [2024-06-08 21:26:17.328765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.239 [2024-06-08 21:26:17.328782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.239 [2024-06-08 21:26:17.328789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.339788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.339806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8148 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.339812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.350898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.350916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.350922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.362728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.362745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.362751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.373843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.373860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.373866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.384833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.384850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.384856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.396745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.396762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.396768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.407702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.407719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.407726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.418847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.418863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:3902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.418873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.430768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.430786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.430792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.441829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.441846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.441853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.452976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.452994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.453000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.463891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.463908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.463915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.475657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.475674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.475680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.486445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.486462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.486469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.498469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.498486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.498493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.509407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.509425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.509431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.520383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.520404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.520411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.531623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.531641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.531647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.543518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.543535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.543542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.554644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.501 [2024-06-08 21:26:17.554661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.501 [2024-06-08 21:26:17.554667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.501 [2024-06-08 21:26:17.565560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.502 [2024-06-08 21:26:17.565578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.502 [2024-06-08 21:26:17.565584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.502 [2024-06-08 21:26:17.577097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a39600) 00:30:39.502 [2024-06-08 21:26:17.577114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.502 [2024-06-08 21:26:17.577121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.502 [2024-06-08 21:26:17.588894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.502 [2024-06-08 21:26:17.588911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.502 [2024-06-08 21:26:17.588917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.599631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.599648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.599655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.611694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.611711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.611721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.622828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.622845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.622852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.633522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.633540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.633546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.645547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.645564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.645571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.656470] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.656487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.656494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.667565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.667583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.667589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.679573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.679591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.679597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.690621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.690638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.763 [2024-06-08 21:26:17.690645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.763 [2024-06-08 21:26:17.701522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.763 [2024-06-08 21:26:17.701539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.701546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.712676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.712697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.712703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.724462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.724480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.724486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:39.764 [2024-06-08 21:26:17.735616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.735633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.735640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.746445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.746462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.746469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.758351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.758369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.758375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.769326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.769343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.769350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.780349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.780366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.780372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.792249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.792267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.792273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.803177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.803194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.803201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.814336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.814353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.814360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.826059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.826077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.826083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.836931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.836948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.836955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.764 [2024-06-08 21:26:17.848993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:39.764 [2024-06-08 21:26:17.849010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.764 [2024-06-08 21:26:17.849017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.859982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.859999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.860006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.871094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.871111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.871118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.882844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.882862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.882869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.893922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.893939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.893946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.904781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.904799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.904809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.915903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.915920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.915926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.927875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.927892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.927898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.939038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.939055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.939063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.950919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.950936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.950942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.961590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.961607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:40.026 [2024-06-08 21:26:17.961614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.972641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.972659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.972665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.983783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.983801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.983807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:17.995699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:17.995716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:17.995722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:18.006741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.026 [2024-06-08 21:26:18.006758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.026 [2024-06-08 21:26:18.006765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.026 [2024-06-08 21:26:18.017713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.017730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.017736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.029787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.029804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.029810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.040894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.040911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.040917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.052113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.052130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.052137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.062981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.062999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.063005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.074806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.074823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.074830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.085959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.085976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.085982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.096796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.096813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.096823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.027 [2024-06-08 21:26:18.108036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.027 [2024-06-08 21:26:18.108054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.027 [2024-06-08 21:26:18.108060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.288 [2024-06-08 21:26:18.119975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.288 [2024-06-08 21:26:18.119993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.288 [2024-06-08 21:26:18.120000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.288 [2024-06-08 21:26:18.131290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.288 [2024-06-08 21:26:18.131308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.288 [2024-06-08 21:26:18.131315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.288 [2024-06-08 21:26:18.142086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.288 [2024-06-08 21:26:18.142104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.288 [2024-06-08 21:26:18.142111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.288 [2024-06-08 21:26:18.154781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.288 [2024-06-08 21:26:18.154798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.154805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.166374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.166391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.166398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.177218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.177235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.177241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.188842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.188859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.188866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.199799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 
00:30:40.289 [2024-06-08 21:26:18.199819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.199826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.211509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.211526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.211533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.222353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.222370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.222376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.234055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.234072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.234078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.245148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.245165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.245171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.256100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.256118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.256124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.267927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.267944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.267950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.279005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.279022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.279029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.290062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.290079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.290085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.301917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.301933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.301940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.313044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.313061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.313068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.324769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.324786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.324793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.335777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.335795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.335801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.346851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.346869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.346877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.358524] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.358541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.358548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.289 [2024-06-08 21:26:18.369668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.289 [2024-06-08 21:26:18.369686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.289 [2024-06-08 21:26:18.369693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.381364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.381382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.381388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.393253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.393270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.393280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.404768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.404785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.404792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.415767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.415784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.415791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.428442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.428459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.428465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:40.551 [2024-06-08 21:26:18.438842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.438859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.438866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.450193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.450211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.450218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.461267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.461284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.461291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.472994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.473011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.551 [2024-06-08 21:26:18.473018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.551 [2024-06-08 21:26:18.483877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.551 [2024-06-08 21:26:18.483894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.483901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.495020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.495038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.495044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.506934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.506952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.506959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.518080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.518097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.518103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.529246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.529264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.529271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.540099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.540116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.540122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.552050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.552067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.552074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.563032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.563049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.563056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.574194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.574212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.574218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.585998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.586016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.586025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.596854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.596871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.596878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.608068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.608085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.608092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.620107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.620123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.620130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.552 [2024-06-08 21:26:18.631184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.552 [2024-06-08 21:26:18.631202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.552 [2024-06-08 21:26:18.631208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.642196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.642215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.642224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.653949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.653966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.653973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.664737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.664754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:40.814 [2024-06-08 21:26:18.664760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.676520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.676537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.676544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.687608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.687632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.687639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.698789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.698807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.698813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.710693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.710710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.710717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.721788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.721806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.721812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.732814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.732832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.732839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.744639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.744657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:12388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.744664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.755753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.755770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.755776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.766652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.766669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.766677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.778596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.778613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.778620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.789642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.789659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.789666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.800720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.800737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.800744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.811668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.811686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.811693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.823791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.823808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.823814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.814 [2024-06-08 21:26:18.834869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.814 [2024-06-08 21:26:18.834886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.814 [2024-06-08 21:26:18.834893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.815 [2024-06-08 21:26:18.845433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.815 [2024-06-08 21:26:18.845450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.815 [2024-06-08 21:26:18.845457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.815 [2024-06-08 21:26:18.858028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.815 [2024-06-08 21:26:18.858045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.815 [2024-06-08 21:26:18.858051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.815 [2024-06-08 21:26:18.868731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.815 [2024-06-08 21:26:18.868749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.815 [2024-06-08 21:26:18.868755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.815 [2024-06-08 21:26:18.880651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.815 [2024-06-08 21:26:18.880667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.815 [2024-06-08 21:26:18.880676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.815 [2024-06-08 21:26:18.891663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:40.815 [2024-06-08 21:26:18.891681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.815 [2024-06-08 21:26:18.891687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.815 [2024-06-08 21:26:18.902650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 
00:30:40.815 [2024-06-08 21:26:18.902668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.815 [2024-06-08 21:26:18.902674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.913796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.913814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.913820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.925829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.925847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.925854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.936734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.936751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.936758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.947694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.947711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.947717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.959902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.959919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.959926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.970848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.970865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.970872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.981739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.981760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.981766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:18.993691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:18.993708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:18.993714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:19.004773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:19.004790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:19.004797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 [2024-06-08 21:26:19.014795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a39600) 00:30:41.077 [2024-06-08 21:26:19.014812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.077 [2024-06-08 21:26:19.014818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.077 00:30:41.077 Latency(us) 00:30:41.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.077 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:41.077 nvme0n1 : 2.00 22425.43 87.60 0.00 0.00 5701.01 3467.95 16056.32 00:30:41.077 =================================================================================================================== 00:30:41.077 Total : 22425.43 87.60 0.00 0.00 5701.01 3467.95 16056.32 00:30:41.077 0 00:30:41.077 21:26:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:41.077 21:26:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:41.077 21:26:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:41.077 | .driver_specific 00:30:41.077 | .nvme_error 00:30:41.077 | .status_code 00:30:41.077 | .command_transient_transport_error' 00:30:41.077 21:26:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:41.338 21:26:19 -- host/digest.sh@71 -- # (( 176 > 0 )) 00:30:41.338 21:26:19 -- host/digest.sh@73 -- # killprocess 2568505 00:30:41.338 21:26:19 -- common/autotest_common.sh@926 -- # '[' -z 2568505 ']' 00:30:41.338 21:26:19 -- common/autotest_common.sh@930 -- # kill -0 2568505 00:30:41.338 21:26:19 -- common/autotest_common.sh@931 -- # uname 00:30:41.338 21:26:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:41.338 21:26:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2568505 00:30:41.338 21:26:19 -- 
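A minimal, hedged sketch of the counter lookup the xtrace above performs (get_transient_errcount): the socket path, bdev name and jq filter are copied from this trace, and the counter is only populated because bdev_nvme_set_options is invoked with --nvme-error-stat when each bperf instance is configured.

  # read the transient-transport-error count for one bdev from bdevperf's RPC socket
  get_transient_errcount() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

  # the pass/fail check traced above: 176 transient errors were counted, so (( count > 0 )) holds
  count=$(get_transient_errcount nvme0n1)
  (( count > 0 ))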
common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:41.338 21:26:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:41.338 21:26:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2568505' 00:30:41.338 killing process with pid 2568505 00:30:41.338 21:26:19 -- common/autotest_common.sh@945 -- # kill 2568505 00:30:41.338 Received shutdown signal, test time was about 2.000000 seconds 00:30:41.338 00:30:41.338 Latency(us) 00:30:41.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.338 =================================================================================================================== 00:30:41.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:41.339 21:26:19 -- common/autotest_common.sh@950 -- # wait 2568505 00:30:41.339 21:26:19 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:30:41.339 21:26:19 -- host/digest.sh@54 -- # local rw bs qd 00:30:41.339 21:26:19 -- host/digest.sh@56 -- # rw=randread 00:30:41.339 21:26:19 -- host/digest.sh@56 -- # bs=131072 00:30:41.339 21:26:19 -- host/digest.sh@56 -- # qd=16 00:30:41.339 21:26:19 -- host/digest.sh@58 -- # bperfpid=2569203 00:30:41.339 21:26:19 -- host/digest.sh@60 -- # waitforlisten 2569203 /var/tmp/bperf.sock 00:30:41.339 21:26:19 -- common/autotest_common.sh@819 -- # '[' -z 2569203 ']' 00:30:41.339 21:26:19 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:41.339 21:26:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:41.339 21:26:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:41.339 21:26:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:41.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:41.339 21:26:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:41.339 21:26:19 -- common/autotest_common.sh@10 -- # set +x 00:30:41.339 [2024-06-08 21:26:19.390605] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:41.339 [2024-06-08 21:26:19.390659] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569203 ] 00:30:41.339 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:41.339 Zero copy mechanism will not be used. 
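The next case, run_bperf_err randread 131072 16, repeats the experiment with 128 KiB random reads at queue depth 16. The trace launches a fresh bdevperf in wait-for-RPC mode on /var/tmp/bperf.sock and blocks in waitforlisten until the socket is up; a rough standalone equivalent is sketched below (the polling loop is a simplified stand-in for the suite's waitforlisten helper, not the helper itself).

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # start bdevperf idle (-z) on its own RPC socket: 128 KiB random reads, qd 16, 2 s run
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # wait until the RPC socket exists before sending any bperf_rpc commands
  for _ in $(seq 1 100); do
      [[ -S /var/tmp/bperf.sock ]] && break
      sleep 0.1
  done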
00:30:41.339 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.600 [2024-06-08 21:26:19.440382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.600 [2024-06-08 21:26:19.491865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.172 21:26:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:42.172 21:26:20 -- common/autotest_common.sh@852 -- # return 0 00:30:42.172 21:26:20 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:42.172 21:26:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:42.433 21:26:20 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:42.433 21:26:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.433 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:30:42.433 21:26:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.433 21:26:20 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:42.433 21:26:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:42.695 nvme0n1 00:30:42.695 21:26:20 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:42.695 21:26:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.695 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:30:42.695 21:26:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.695 21:26:20 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:42.695 21:26:20 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:42.695 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:42.695 Zero copy mechanism will not be used. 00:30:42.695 Running I/O for 2 seconds... 
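The xtrace above is the whole per-case setup: NVMe error statistics are enabled on the bdevperf instance, any previous crc32c injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is armed for the next 32 operations, and perform_tests is started; the reads traced below then fail their data digest check and complete as transient transport errors. A standalone replay of those RPCs follows (bperf_rpc's socket is visible in the trace; rpc_cmd's is not expanded there, so rpc.py's default socket is assumed).

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }   # default RPC socket assumed

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests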
00:30:42.695 [2024-06-08 21:26:20.678112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.678143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.678153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.695 [2024-06-08 21:26:20.695260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.695282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.695289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.695 [2024-06-08 21:26:20.711121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.711141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.711148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.695 [2024-06-08 21:26:20.726375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.726394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.726406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.695 [2024-06-08 21:26:20.743062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.743080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.743087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.695 [2024-06-08 21:26:20.758892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.758910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.758917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.695 [2024-06-08 21:26:20.774736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.695 [2024-06-08 21:26:20.774754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.695 [2024-06-08 21:26:20.774761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.792019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.792038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.792044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.809085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.809103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.809110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.825030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.825049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.825059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.840765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.840783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.840789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.857336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.857354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.857360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.873017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.873034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.873041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.889960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.889978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.889985] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.907052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.907069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.907076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.922119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.922136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.922143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.936817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.936834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.936840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.955776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.955794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.955800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.972463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.972481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.972488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:20.990020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:20.990038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:20.990045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:21.003930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:21.003949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 
21:26:21.003955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:21.016589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:21.016607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:21.016614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:21.029975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:21.029993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:21.029999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:42.957 [2024-06-08 21:26:21.044382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:42.957 [2024-06-08 21:26:21.044400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.957 [2024-06-08 21:26:21.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.058533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.058552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.058558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.072707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.072725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.072731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.085866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.085883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.085894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.099169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.099187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.099194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.111850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.111868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.111875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.126255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.126273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.126279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.139772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.139790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.139796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.153557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.153574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.219 [2024-06-08 21:26:21.153581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.219 [2024-06-08 21:26:21.167429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.219 [2024-06-08 21:26:21.167447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.167455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.181076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.181095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.181101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.194330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.194348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.194354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.207471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.207493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.207499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.222019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.222037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.222043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.234802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.234820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.234826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.249321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.249339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.249345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.261976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.261993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.262000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.277037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.277055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.277063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.292723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.292741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.292747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.220 [2024-06-08 21:26:21.306372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.220 [2024-06-08 21:26:21.306390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.220 [2024-06-08 21:26:21.306397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.319642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.319660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.319667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.333094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.333111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.333118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.345821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.345839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.345846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.359079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.359097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.359104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.373166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.373184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.373191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.387364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 
[2024-06-08 21:26:21.387381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.387387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.400443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.400460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.400467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.414288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.414306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.414313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.427339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.427357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.427363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.441333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.482 [2024-06-08 21:26:21.441352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.482 [2024-06-08 21:26:21.441363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.482 [2024-06-08 21:26:21.456384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.456407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.456414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.468901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.468919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.468925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.483036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.483054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.483061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.498597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.498615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.498622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.516204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.516221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.516228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.532219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.532238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.548260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.548278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.548284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.483 [2024-06-08 21:26:21.566834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.483 [2024-06-08 21:26:21.566852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.483 [2024-06-08 21:26:21.566858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.744 [2024-06-08 21:26:21.581162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.744 [2024-06-08 21:26:21.581184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.744 [2024-06-08 21:26:21.581191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.744 [2024-06-08 21:26:21.596498] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.744 [2024-06-08 21:26:21.596516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.744 [2024-06-08 21:26:21.596522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.612332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.612349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.612356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.630853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.630871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.630877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.644905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.644923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.644930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.663181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.663200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.663206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.679042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.679060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.679066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.695782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.695800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.695806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
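Each repeated pair in the stream above and below is the host NVMe driver reporting one failed READ: nvme_tcp.c flags a data digest mismatch on the receive path, then nvme_qpair.c prints the offending command and its completion. In the completion, "(00/22)" is status code type 0x0 with status code 0x22, i.e. Command Transient Transport Error, and dnr:0 means the Do Not Retry bit is clear, so the I/O is retried rather than failed outright. As a rough, hypothetical way to tally these records from a saved copy of this console output (the file name console.log is an assumption, not something the test produces):
  # Count transient transport error completions per queue ID in a saved console log.
  grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' console.log | sort | uniq -c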
00:30:43.745 [2024-06-08 21:26:21.711947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.711965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.711971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.726878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.726896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.726903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.743554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.743573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.743579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.759103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.759122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.759128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.774193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.774211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.774218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.787219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.787238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.787244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.804895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.804913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.804920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.818882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.818900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.818906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.745 [2024-06-08 21:26:21.833169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:43.745 [2024-06-08 21:26:21.833188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.745 [2024-06-08 21:26:21.833194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.850737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.850756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.850766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.866724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.866742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.866748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.882525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.882543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.882550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.898107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.898127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.898133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.913832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.913851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.913857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.930796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.930815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.930821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.945648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.945666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.945672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.961337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.961355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.961361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.976934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.976952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.976959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:21.989360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:21.989378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:21.989385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.003525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.003544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.003550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.017005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.017024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.017030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.033014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.033033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.033039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.045533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.045551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.045557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.063486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.063505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.063511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.078676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.078701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.007 [2024-06-08 21:26:22.094230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.007 [2024-06-08 21:26:22.094249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.007 [2024-06-08 21:26:22.094255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.111505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.111524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.111533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.127589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.127608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 
[2024-06-08 21:26:22.127614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.143030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.143049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.143055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.159208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.159226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.159233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.173772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.173791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.173797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.189575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.189594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.189601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.207783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.207803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.207810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.224356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.224375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.224382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.240366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.240384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.240391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.255623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.255646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.255652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.271278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.271296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.271302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.286800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.286818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.286825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.302637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.302656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.302662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.317938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.317957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.317964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.331412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.331430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.331436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.269 [2024-06-08 21:26:22.347653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.269 [2024-06-08 21:26:22.347671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.269 [2024-06-08 21:26:22.347677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.363834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.363852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.363858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.379848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.379866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.379873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.395756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.395775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.395781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.410983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.411002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.411008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.426496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.426516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.426522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.444093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.444112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.444119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.460042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.460061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.460067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.531 [2024-06-08 21:26:22.476068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.531 [2024-06-08 21:26:22.476086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.531 [2024-06-08 21:26:22.476093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.493267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.493286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.493292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.506752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.506771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.506777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.521060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.521079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.521088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.537305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.537323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.537330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.552668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.552688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.552694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.569660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 
[2024-06-08 21:26:22.569678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.569684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.586190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.586208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.586215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.602415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.602433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.602439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.532 [2024-06-08 21:26:22.620703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.532 [2024-06-08 21:26:22.620721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.532 [2024-06-08 21:26:22.620728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.793 [2024-06-08 21:26:22.635994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.793 [2024-06-08 21:26:22.636013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.793 [2024-06-08 21:26:22.636019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.793 [2024-06-08 21:26:22.652030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa40000) 00:30:44.793 [2024-06-08 21:26:22.652049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.793 [2024-06-08 21:26:22.652055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.793 00:30:44.793 Latency(us) 00:30:44.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:44.793 nvme0n1 : 2.01 2021.28 252.66 0.00 0.00 7913.57 3549.87 20643.84 00:30:44.793 =================================================================================================================== 00:30:44.793 Total : 2021.28 252.66 0.00 0.00 7913.57 3549.87 20643.84 00:30:44.793 0 00:30:44.793 21:26:22 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:44.793 21:26:22 -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:30:44.793 21:26:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:44.793 21:26:22 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:44.793 | .driver_specific 00:30:44.793 | .nvme_error 00:30:44.793 | .status_code 00:30:44.793 | .command_transient_transport_error' 00:30:44.793 21:26:22 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:30:44.793 21:26:22 -- host/digest.sh@73 -- # killprocess 2569203 00:30:44.793 21:26:22 -- common/autotest_common.sh@926 -- # '[' -z 2569203 ']' 00:30:44.793 21:26:22 -- common/autotest_common.sh@930 -- # kill -0 2569203 00:30:44.793 21:26:22 -- common/autotest_common.sh@931 -- # uname 00:30:44.793 21:26:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:44.793 21:26:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2569203 00:30:44.793 21:26:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:44.793 21:26:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:44.793 21:26:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2569203' 00:30:44.793 killing process with pid 2569203 00:30:44.793 21:26:22 -- common/autotest_common.sh@945 -- # kill 2569203 00:30:44.793 Received shutdown signal, test time was about 2.000000 seconds 00:30:44.793 00:30:44.793 Latency(us) 00:30:44.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.793 =================================================================================================================== 00:30:44.793 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.793 21:26:22 -- common/autotest_common.sh@950 -- # wait 2569203 00:30:45.055 21:26:22 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:30:45.055 21:26:22 -- host/digest.sh@54 -- # local rw bs qd 00:30:45.055 21:26:22 -- host/digest.sh@56 -- # rw=randwrite 00:30:45.055 21:26:22 -- host/digest.sh@56 -- # bs=4096 00:30:45.055 21:26:22 -- host/digest.sh@56 -- # qd=128 00:30:45.055 21:26:22 -- host/digest.sh@58 -- # bperfpid=2569887 00:30:45.055 21:26:22 -- host/digest.sh@60 -- # waitforlisten 2569887 /var/tmp/bperf.sock 00:30:45.055 21:26:22 -- common/autotest_common.sh@819 -- # '[' -z 2569887 ']' 00:30:45.055 21:26:22 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:45.055 21:26:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:45.055 21:26:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:45.055 21:26:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:45.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:45.055 21:26:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:45.055 21:26:22 -- common/autotest_common.sh@10 -- # set +x 00:30:45.055 [2024-06-08 21:26:23.038450] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
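The shell trace above is where the randread digest case is judged and torn down: get_transient_errcount pulls the transient-transport-error counter out of bdevperf's per-bdev I/O statistics over the /var/tmp/bperf.sock RPC socket, the test asserts it is non-zero (130 in this run), and the old bdevperf (pid 2569203) is killed before run_bperf_err starts the 4 KiB randwrite variant at queue depth 128, whose startup banner continues below. A minimal stand-alone sketch of that check, using only the rpc.py call and jq filter visible in the trace (socket path and bdev name are the ones from this run; rpc.py is assumed to be invoked from the SPDK source tree):
  # Read bdevperf's NVMe error statistics and extract the transient transport error count.
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest test only passes if the injected CRC corruption produced at least one such error.
  (( errcount > 0 ))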
00:30:45.055 [2024-06-08 21:26:23.038524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569887 ] 00:30:45.055 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.055 [2024-06-08 21:26:23.112165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.315 [2024-06-08 21:26:23.163745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.885 21:26:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:45.885 21:26:23 -- common/autotest_common.sh@852 -- # return 0 00:30:45.885 21:26:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:45.885 21:26:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:45.885 21:26:23 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:45.885 21:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.885 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:30:45.885 21:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.885 21:26:23 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:45.885 21:26:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:46.145 nvme0n1 00:30:46.145 21:26:24 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:46.145 21:26:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.145 21:26:24 -- common/autotest_common.sh@10 -- # set +x 00:30:46.145 21:26:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.145 21:26:24 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:46.145 21:26:24 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.406 Running I/O for 2 seconds... 
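The trace immediately above wires up the randwrite case before the 2-second run starts: bdevperf is told to keep per-bdev NVMe error statistics and retry retryable errors indefinitely, the controller is attached over NVMe/TCP with the data digest enabled (--ddgst), crc32c error injection is re-armed in corrupt mode, and perform_tests is issued over the bperf socket. A condensed sketch of that sequence as plain rpc.py calls, with the caveat that the accel_error_inject_error call goes through rpc_cmd in the trace, whose RPC socket is not shown here; the remaining values are exactly those from this run:
  rpc=./scripts/rpc.py          # assumed to be run from the SPDK source tree
  sock=/var/tmp/bperf.sock
  # Keep NVMe error counters per bdev and never give up on retryable errors.
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target subsystem over NVMe/TCP with data digest checking enabled.
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm crc32c error injection in 'corrupt' mode (-i 256 as used by the test); issued
  # via rpc_cmd in digest.sh, so it may target a different RPC socket than bperf.sock.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Trigger the workload configured on the bdevperf command line (-w randwrite -o 4096 -q 128 -t 2).
  ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests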
00:30:46.406 [2024-06-08 21:26:24.316280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd208 00:30:46.406 [2024-06-08 21:26:24.317487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.317513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.328739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.329114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.329132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.340623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.340969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.340986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.352503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.352905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.352921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.364417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.364832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.364849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.376298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.376716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.376738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.388142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.388521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.388538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.400002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.400409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.400427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.411883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.412163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.412179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.423739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.424141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.424158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.435658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.435952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.406 [2024-06-08 21:26:24.435969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.406 [2024-06-08 21:26:24.447477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.406 [2024-06-08 21:26:24.447770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.407 [2024-06-08 21:26:24.447786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.407 [2024-06-08 21:26:24.459379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.407 [2024-06-08 21:26:24.459671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.407 [2024-06-08 21:26:24.459687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.407 [2024-06-08 21:26:24.471209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.407 [2024-06-08 21:26:24.471493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.407 [2024-06-08 21:26:24.471510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.407 [2024-06-08 21:26:24.483065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.407 [2024-06-08 21:26:24.483344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.407 [2024-06-08 21:26:24.483360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.407 [2024-06-08 21:26:24.494916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.407 [2024-06-08 21:26:24.495195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.407 [2024-06-08 21:26:24.495210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.667 [2024-06-08 21:26:24.506736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.667 [2024-06-08 21:26:24.507135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.667 [2024-06-08 21:26:24.507151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.667 [2024-06-08 21:26:24.518615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.667 [2024-06-08 21:26:24.519020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.667 [2024-06-08 21:26:24.519036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.667 [2024-06-08 21:26:24.530569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.667 [2024-06-08 21:26:24.531055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.667 [2024-06-08 21:26:24.531071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.667 [2024-06-08 21:26:24.542379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.667 [2024-06-08 21:26:24.542672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.542688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.554233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.554630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.554646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.566095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.566480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.566496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.577959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.578386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.578408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.589778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.590190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.590207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.601629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.601912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.601929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.613448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.613833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.613849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.625248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.625633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.625650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.637138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.637442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.637459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.649023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.649288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.649305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.660998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.661496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.661512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.672813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.673211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.673228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.684719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.685163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.685182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.696529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.696891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.696907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.708368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.708650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.708668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.720216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.720489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 
21:26:24.720506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.732025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.732294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.732311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.743849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.744231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.744250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.668 [2024-06-08 21:26:24.755722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.668 [2024-06-08 21:26:24.756166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.668 [2024-06-08 21:26:24.756182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.929 [2024-06-08 21:26:24.767587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.929 [2024-06-08 21:26:24.767961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.929 [2024-06-08 21:26:24.767978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.929 [2024-06-08 21:26:24.779427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.929 [2024-06-08 21:26:24.779810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.929 [2024-06-08 21:26:24.779826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.929 [2024-06-08 21:26:24.791283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.791557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.791574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.803109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.803377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16199 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:46.930 [2024-06-08 21:26:24.803393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.814951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.815318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.815334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.826799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.827222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.827238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.838652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.839061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.839077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.850483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.850763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.850779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.862298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.862713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.862729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.874152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.874565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.874581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.886038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.886454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21604 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.886473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.897903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.898181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.898197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.909787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.910051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.910067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.921617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.922039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.922055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.933410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.933780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.933796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.945327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.945619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.945636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.957135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.957573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.957590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.968971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.969391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.969411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.980774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.981080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.981096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:24.992574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:24.992950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:24.992970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:25.004422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:25.004893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:25.004909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:46.930 [2024-06-08 21:26:25.016268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:46.930 [2024-06-08 21:26:25.016700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.930 [2024-06-08 21:26:25.016716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.191 [2024-06-08 21:26:25.028096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.191 [2024-06-08 21:26:25.028368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.191 [2024-06-08 21:26:25.028384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.191 [2024-06-08 21:26:25.039980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.191 [2024-06-08 21:26:25.040419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.191 [2024-06-08 21:26:25.040435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.191 [2024-06-08 21:26:25.051787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.052173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.052189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.063637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.064003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.064019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.075427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.075857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.075873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.087296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.087761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.087778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.099121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.099526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.099542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.110968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.111374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.111389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.122789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.123187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.123203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.134614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 
21:26:25.135034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.135051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.146478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.146756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.146773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.158342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.158639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.158655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.170194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.170643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.170659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.182018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.182430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.182447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.193868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.194287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.194305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.205703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.205989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.206006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.217547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 
00:30:47.192 [2024-06-08 21:26:25.218021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.218037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.229362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.229806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.229822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.241208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.241491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.241507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.253262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.253719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.253736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.265110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.265535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.265552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.192 [2024-06-08 21:26:25.276985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.192 [2024-06-08 21:26:25.277389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.192 [2024-06-08 21:26:25.277410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.288829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.289238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.289254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.300681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.301093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.301108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.312527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.312958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.312974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.324321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.324592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.324608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.336221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.336614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.336630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.348047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.348451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.348467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.359928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.360372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.453 [2024-06-08 21:26:25.360388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.453 [2024-06-08 21:26:25.371757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.453 [2024-06-08 21:26:25.372170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.372186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.383585] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.384015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.384031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.395445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.395864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.395880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.407332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.407727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.407743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.419175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.419591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.419607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.431044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.431422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.431439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.442884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.443164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.443180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.454763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.455233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.455251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.466690] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.467066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.467083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.478561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.478837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.478853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.490370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.490814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.490831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.502370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.502787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.502806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.514222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.514484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.514500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.526072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.526349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.526365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.454 [2024-06-08 21:26:25.537904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.454 [2024-06-08 21:26:25.538180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.454 [2024-06-08 21:26:25.538196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:30:47.716 [2024-06-08 21:26:25.549764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.550182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.550199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.561654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.561926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.561942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.573492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.573855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.573872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.585342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.585621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.585638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.597218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.597525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.597542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.609111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.609620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.609637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.621002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.621277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.621294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.632844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.633255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.633271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.644695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.645103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.645119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.656567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.656903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.656920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.668513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.668797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.668813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.680359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.680744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.680760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.692202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.692647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.692663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.704024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.704426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.704443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.715917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.716226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.716242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.727744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.728016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.728033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.739588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.739995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.740011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.751418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.751895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.751911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.763321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.763656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.763673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.716 [2024-06-08 21:26:25.775136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.716 [2024-06-08 21:26:25.775421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.716 [2024-06-08 21:26:25.775438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.717 [2024-06-08 21:26:25.787014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.717 [2024-06-08 21:26:25.787289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.717 [2024-06-08 21:26:25.787306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.717 [2024-06-08 21:26:25.798834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.717 [2024-06-08 21:26:25.799120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.717 [2024-06-08 21:26:25.799136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.810750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.811069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.811088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.822589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.822992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.823008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.834448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.834753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.834770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.846385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.846824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.846841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.858216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.858635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.858651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.870074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.870489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.870506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.881962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.882242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.882258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.893820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.894087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.894104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.905667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.905970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.905987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.917635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.918004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.918020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.929502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.929923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.929939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.941351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.941734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.941750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.953189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.953586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 
21:26:25.953602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.965093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.965541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.965557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.976901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.977322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.977338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:25.988767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:25.989188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:25.989204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:26.000618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:26.001018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.978 [2024-06-08 21:26:26.001034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.978 [2024-06-08 21:26:26.012467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.978 [2024-06-08 21:26:26.012899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.979 [2024-06-08 21:26:26.012916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.979 [2024-06-08 21:26:26.024280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.979 [2024-06-08 21:26:26.024711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.979 [2024-06-08 21:26:26.024728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.979 [2024-06-08 21:26:26.036092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.979 [2024-06-08 21:26:26.036517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2209 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:47.979 [2024-06-08 21:26:26.036533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.979 [2024-06-08 21:26:26.047998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.979 [2024-06-08 21:26:26.048286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.979 [2024-06-08 21:26:26.048302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.979 [2024-06-08 21:26:26.059827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:47.979 [2024-06-08 21:26:26.060243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.979 [2024-06-08 21:26:26.060259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.071647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.072041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.072057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.083486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.083853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.083869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.095326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.095832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.095848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.107183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.107595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.107612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.119033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.119302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24824 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.119321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.130841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.131226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.131242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.142647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.143064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.143080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.154488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.154865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.154881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.240 [2024-06-08 21:26:26.166417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.240 [2024-06-08 21:26:26.166800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.240 [2024-06-08 21:26:26.166816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.178205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.178493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.178509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.190103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.190385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.190404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.201965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.202377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.202393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.213880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.214316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.214333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.225812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.226209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.226225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.237628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.237968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.237983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.249641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.250017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.250033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.261455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.261937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.261953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.273313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.273787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.273804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.285127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.285525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.285541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 [2024-06-08 21:26:26.296969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8770) with pdu=0x2000190fd640 00:30:48.241 [2024-06-08 21:26:26.297236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.241 [2024-06-08 21:26:26.297252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:48.241 00:30:48.241 Latency(us) 00:30:48.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.241 nvme0n1 : 2.01 21436.02 83.73 0.00 0.00 5960.71 4833.28 16711.68 00:30:48.241 =================================================================================================================== 00:30:48.241 Total : 21436.02 83.73 0.00 0.00 5960.71 4833.28 16711.68 00:30:48.241 0 00:30:48.241 21:26:26 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:48.241 21:26:26 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:48.241 21:26:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:48.241 21:26:26 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:48.241 | .driver_specific 00:30:48.241 | .nvme_error 00:30:48.241 | .status_code 00:30:48.241 | .command_transient_transport_error' 00:30:48.501 21:26:26 -- host/digest.sh@71 -- # (( 168 > 0 )) 00:30:48.501 21:26:26 -- host/digest.sh@73 -- # killprocess 2569887 00:30:48.501 21:26:26 -- common/autotest_common.sh@926 -- # '[' -z 2569887 ']' 00:30:48.501 21:26:26 -- common/autotest_common.sh@930 -- # kill -0 2569887 00:30:48.501 21:26:26 -- common/autotest_common.sh@931 -- # uname 00:30:48.501 21:26:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:48.501 21:26:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2569887 00:30:48.501 21:26:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:48.501 21:26:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:48.501 21:26:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2569887' 00:30:48.501 killing process with pid 2569887 00:30:48.501 21:26:26 -- common/autotest_common.sh@945 -- # kill 2569887 00:30:48.501 Received shutdown signal, test time was about 2.000000 seconds 00:30:48.501 00:30:48.501 Latency(us) 00:30:48.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.501 =================================================================================================================== 00:30:48.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.501 21:26:26 -- common/autotest_common.sh@950 -- # wait 2569887 00:30:48.761 21:26:26 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:30:48.761 21:26:26 -- host/digest.sh@54 -- # local rw bs qd 00:30:48.761 21:26:26 -- host/digest.sh@56 -- # rw=randwrite 00:30:48.761 21:26:26 -- host/digest.sh@56 -- # bs=131072 00:30:48.761 21:26:26 -- host/digest.sh@56 -- # qd=16 00:30:48.761 21:26:26 -- host/digest.sh@58 
-- # bperfpid=2570580 00:30:48.761 21:26:26 -- host/digest.sh@60 -- # waitforlisten 2570580 /var/tmp/bperf.sock 00:30:48.761 21:26:26 -- common/autotest_common.sh@819 -- # '[' -z 2570580 ']' 00:30:48.761 21:26:26 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:48.761 21:26:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:48.761 21:26:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:48.761 21:26:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:48.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:48.761 21:26:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:48.761 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:30:48.761 [2024-06-08 21:26:26.677825] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:48.761 [2024-06-08 21:26:26.677876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570580 ] 00:30:48.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:48.761 Zero copy mechanism will not be used. 00:30:48.761 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.761 [2024-06-08 21:26:26.750861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.761 [2024-06-08 21:26:26.800854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.702 21:26:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:49.702 21:26:27 -- common/autotest_common.sh@852 -- # return 0 00:30:49.702 21:26:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:49.702 21:26:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:49.702 21:26:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:49.702 21:26:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.702 21:26:27 -- common/autotest_common.sh@10 -- # set +x 00:30:49.702 21:26:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.702 21:26:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.702 21:26:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.961 nvme0n1 00:30:49.961 21:26:27 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:49.961 21:26:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.961 21:26:27 -- common/autotest_common.sh@10 -- # set +x 00:30:49.961 21:26:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.961 21:26:27 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:49.961 21:26:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:49.961 I/O size of 131072 
is greater than zero copy threshold (65536). 00:30:49.961 Zero copy mechanism will not be used. 00:30:49.961 Running I/O for 2 seconds... 00:30:49.961 [2024-06-08 21:26:27.972198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:27.972325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.961 [2024-06-08 21:26:27.972353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.961 [2024-06-08 21:26:27.983487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:27.983742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.961 [2024-06-08 21:26:27.983759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.961 [2024-06-08 21:26:27.993996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:27.994144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.961 [2024-06-08 21:26:27.994159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.961 [2024-06-08 21:26:28.004958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:28.005105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.961 [2024-06-08 21:26:28.005120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.961 [2024-06-08 21:26:28.013770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:28.013846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.961 [2024-06-08 21:26:28.013861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.961 [2024-06-08 21:26:28.021849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:28.021983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.961 [2024-06-08 21:26:28.021998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.961 [2024-06-08 21:26:28.031480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.961 [2024-06-08 21:26:28.031614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:49.962 [2024-06-08 21:26:28.031633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.962 [2024-06-08 21:26:28.041584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.962 [2024-06-08 21:26:28.041727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.962 [2024-06-08 21:26:28.041742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.962 [2024-06-08 21:26:28.051043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:49.962 [2024-06-08 21:26:28.051294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.962 [2024-06-08 21:26:28.051310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.060699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.060882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.060897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.069882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.070019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.070034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.079227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.079323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.079338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.089095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.089248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.089263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.097333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.097461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.097476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.104687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.104848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.104863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.112565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.112714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.112729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.121209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.121368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.121383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.129378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.222 [2024-06-08 21:26:28.129583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.222 [2024-06-08 21:26:28.129599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.222 [2024-06-08 21:26:28.137362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.137601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.137617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.145897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.146019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.146034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.153926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.154183] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.154200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.161541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.161655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.161670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.169571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.169667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.169682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.179636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.179872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.179887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.189018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.189255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.189271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.199170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.199301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.199316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.208202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.208324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.208339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.218717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.218852] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.218866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.228197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.228336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.228351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.237869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.238027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.238042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.247840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.247915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.247930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.257758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.258022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.258037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.267726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.267940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.277577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.277743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.277758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.286907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 
00:30:50.223 [2024-06-08 21:26:28.287134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.287149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.294903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.295031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.295046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.223 [2024-06-08 21:26:28.303283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.223 [2024-06-08 21:26:28.303400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.223 [2024-06-08 21:26:28.303419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.313502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.313751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-06-08 21:26:28.313766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.322214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.322399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-06-08 21:26:28.322420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.330599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.330858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-06-08 21:26:28.330874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.338786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.339100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-06-08 21:26:28.339117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.347283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.347382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-06-08 21:26:28.347397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.354785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.354870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-06-08 21:26:28.354885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.484 [2024-06-08 21:26:28.362556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.484 [2024-06-08 21:26:28.362724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.485 [2024-06-08 21:26:28.362739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.485 [2024-06-08 21:26:28.371181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.485 [2024-06-08 21:26:28.371356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.485 [2024-06-08 21:26:28.371371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.485 [2024-06-08 21:26:28.379373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.485 [2024-06-08 21:26:28.379619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.485 [2024-06-08 21:26:28.379635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.485 [2024-06-08 21:26:28.387971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.485 [2024-06-08 21:26:28.388139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.485 [2024-06-08 21:26:28.388154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.485 [2024-06-08 21:26:28.395573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:50.485 [2024-06-08 21:26:28.395838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.485 [2024-06-08 21:26:28.395855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.485 [2024-06-08 21:26:28.404671] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90
00:30:50.485 [2024-06-08 21:26:28.404840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.485 [2024-06-08 21:26:28.404856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:50.485 [2024-06-08 21:26:28.415378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90
00:30:50.485 [2024-06-08 21:26:28.415521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.485 [2024-06-08 21:26:28.415536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:50.485 [2024-06-08 21:26:28.424547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90
00:30:50.485 [2024-06-08 21:26:28.424717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.485 [2024-06-08 21:26:28.424733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same data digest error / WRITE command / COMMAND TRANSIENT TRANSPORT ERROR (00/22) sequence repeats on tqpair=(0x17b8910) for further LBAs from 21:26:28.434 through 21:26:29.702 ...]
00:30:51.800 [2024-06-08 21:26:29.709804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90
00:30:51.800 [2024-06-08 21:26:29.710034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:51.800 [2024-06-08 21:26:29.710049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:51.800 [2024-06-08 21:26:29.717842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90
00:30:51.800 [2024-06-08 21:26:29.718069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:51.800 [2024-06-08 21:26:29.718084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:51.800 [2024-06-08 21:26:29.725700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.725991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.726006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.732773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.732961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.732976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.740469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.740758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.740774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.747482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.747781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.747799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.755026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.755253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.755267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.763712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.763891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.763906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.770845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.771082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.771097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.778816] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.779038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.779053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.786606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.786795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.786810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.793281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.793382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.793397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.801072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.801234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.801249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.810377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.810604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.810618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.817667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.817852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.817867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.825814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.826053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.826069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
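Each pair of entries in this run is one WRITE whose NVMe/TCP data digest (CRC32C over the data PDU) failed verification; the command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the outcome the digest-error test wants to provoke and count. After the I/O loop the script tallies these completions through the bperf RPC socket (the get_transient_errcount step further down). A minimal sketch of that query, assuming the rpc.py path and /var/tmp/bperf.sock socket used in this run; the count_transient_errors helper name is ours, not part of the SPDK tree:

count_transient_errors() {
    local bdev=$1
    # bdev_get_iostat reports per-bdev NVMe error counters under
    # .driver_specific.nvme_error; the digest test only cares about the
    # command_transient_transport_error status-code bucket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}
errcount=$(count_transient_errors nvme0n1)
(( errcount > 0 ))   # this run counted 223 such completions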
00:30:51.800 [2024-06-08 21:26:29.834398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.834638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.834653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.842372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.800 [2024-06-08 21:26:29.842674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.800 [2024-06-08 21:26:29.842690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.800 [2024-06-08 21:26:29.850148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.801 [2024-06-08 21:26:29.850422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.801 [2024-06-08 21:26:29.850437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.801 [2024-06-08 21:26:29.858494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.801 [2024-06-08 21:26:29.858661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.801 [2024-06-08 21:26:29.858676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.801 [2024-06-08 21:26:29.866088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.801 [2024-06-08 21:26:29.866211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.801 [2024-06-08 21:26:29.866226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.801 [2024-06-08 21:26:29.874114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.801 [2024-06-08 21:26:29.874366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.801 [2024-06-08 21:26:29.874382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.801 [2024-06-08 21:26:29.882174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:51.801 [2024-06-08 21:26:29.882337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.801 [2024-06-08 21:26:29.882352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:52.105 [2024-06-08 21:26:29.890454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.105 [2024-06-08 21:26:29.890767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.105 [2024-06-08 21:26:29.890783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:52.105 [2024-06-08 21:26:29.899222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.105 [2024-06-08 21:26:29.899495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.105 [2024-06-08 21:26:29.899510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.105 [2024-06-08 21:26:29.907585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.105 [2024-06-08 21:26:29.907966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.907982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:52.106 [2024-06-08 21:26:29.915895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.106 [2024-06-08 21:26:29.916123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.916137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:52.106 [2024-06-08 21:26:29.923800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.106 [2024-06-08 21:26:29.924043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.924058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:52.106 [2024-06-08 21:26:29.932067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.106 [2024-06-08 21:26:29.932256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.932271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.106 [2024-06-08 21:26:29.940212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.106 [2024-06-08 21:26:29.940464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.940479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:52.106 [2024-06-08 21:26:29.948538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.106 [2024-06-08 21:26:29.948763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.948778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:52.106 [2024-06-08 21:26:29.957667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b8910) with pdu=0x2000190fef90 00:30:52.106 [2024-06-08 21:26:29.957847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.106 [2024-06-08 21:26:29.957865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:52.106 00:30:52.106 Latency(us) 00:30:52.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.106 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:52.106 nvme0n1 : 2.00 3459.56 432.44 0.00 0.00 4617.45 1727.15 12233.39 00:30:52.106 =================================================================================================================== 00:30:52.106 Total : 3459.56 432.44 0.00 0.00 4617.45 1727.15 12233.39 00:30:52.106 0 00:30:52.106 21:26:29 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:52.106 21:26:29 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:52.106 21:26:29 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:52.106 | .driver_specific 00:30:52.106 | .nvme_error 00:30:52.106 | .status_code 00:30:52.106 | .command_transient_transport_error' 00:30:52.106 21:26:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:52.106 21:26:30 -- host/digest.sh@71 -- # (( 223 > 0 )) 00:30:52.106 21:26:30 -- host/digest.sh@73 -- # killprocess 2570580 00:30:52.106 21:26:30 -- common/autotest_common.sh@926 -- # '[' -z 2570580 ']' 00:30:52.106 21:26:30 -- common/autotest_common.sh@930 -- # kill -0 2570580 00:30:52.106 21:26:30 -- common/autotest_common.sh@931 -- # uname 00:30:52.106 21:26:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:52.106 21:26:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2570580 00:30:52.366 21:26:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:52.366 21:26:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:52.366 21:26:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2570580' 00:30:52.366 killing process with pid 2570580 00:30:52.366 21:26:30 -- common/autotest_common.sh@945 -- # kill 2570580 00:30:52.366 Received shutdown signal, test time was about 2.000000 seconds 00:30:52.366 00:30:52.366 Latency(us) 00:30:52.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.366 =================================================================================================================== 00:30:52.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.366 21:26:30 -- common/autotest_common.sh@950 -- # wait 2570580 00:30:52.366 21:26:30 -- 
host/digest.sh@115 -- # killprocess 2568155 00:30:52.366 21:26:30 -- common/autotest_common.sh@926 -- # '[' -z 2568155 ']' 00:30:52.366 21:26:30 -- common/autotest_common.sh@930 -- # kill -0 2568155 00:30:52.366 21:26:30 -- common/autotest_common.sh@931 -- # uname 00:30:52.366 21:26:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:52.366 21:26:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2568155 00:30:52.366 21:26:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:52.366 21:26:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:52.366 21:26:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2568155' 00:30:52.366 killing process with pid 2568155 00:30:52.366 21:26:30 -- common/autotest_common.sh@945 -- # kill 2568155 00:30:52.366 21:26:30 -- common/autotest_common.sh@950 -- # wait 2568155 00:30:52.627 00:30:52.627 real 0m15.915s 00:30:52.627 user 0m31.209s 00:30:52.627 sys 0m3.174s 00:30:52.627 21:26:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.627 21:26:30 -- common/autotest_common.sh@10 -- # set +x 00:30:52.627 ************************************ 00:30:52.627 END TEST nvmf_digest_error 00:30:52.627 ************************************ 00:30:52.627 21:26:30 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:30:52.627 21:26:30 -- host/digest.sh@139 -- # nvmftestfini 00:30:52.627 21:26:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:52.627 21:26:30 -- nvmf/common.sh@116 -- # sync 00:30:52.627 21:26:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:52.627 21:26:30 -- nvmf/common.sh@119 -- # set +e 00:30:52.627 21:26:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:52.627 21:26:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:52.627 rmmod nvme_tcp 00:30:52.627 rmmod nvme_fabrics 00:30:52.627 rmmod nvme_keyring 00:30:52.627 21:26:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:52.627 21:26:30 -- nvmf/common.sh@123 -- # set -e 00:30:52.627 21:26:30 -- nvmf/common.sh@124 -- # return 0 00:30:52.627 21:26:30 -- nvmf/common.sh@477 -- # '[' -n 2568155 ']' 00:30:52.627 21:26:30 -- nvmf/common.sh@478 -- # killprocess 2568155 00:30:52.627 21:26:30 -- common/autotest_common.sh@926 -- # '[' -z 2568155 ']' 00:30:52.627 21:26:30 -- common/autotest_common.sh@930 -- # kill -0 2568155 00:30:52.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2568155) - No such process 00:30:52.627 21:26:30 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2568155 is not found' 00:30:52.627 Process with pid 2568155 is not found 00:30:52.627 21:26:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:52.627 21:26:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:52.627 21:26:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:52.627 21:26:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.627 21:26:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:52.627 21:26:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.627 21:26:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.627 21:26:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.173 21:26:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:55.173 00:30:55.173 real 0m41.200s 00:30:55.173 user 1m4.406s 00:30:55.173 sys 0m11.647s 00:30:55.173 21:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.173 21:26:32 -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.173 ************************************ 00:30:55.173 END TEST nvmf_digest 00:30:55.173 ************************************ 00:30:55.173 21:26:32 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:30:55.173 21:26:32 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:30:55.173 21:26:32 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:30:55.173 21:26:32 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:55.173 21:26:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:55.173 21:26:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.173 21:26:32 -- common/autotest_common.sh@10 -- # set +x 00:30:55.173 ************************************ 00:30:55.173 START TEST nvmf_bdevperf 00:30:55.173 ************************************ 00:30:55.173 21:26:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:55.173 * Looking for test storage... 00:30:55.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.173 21:26:32 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.173 21:26:32 -- nvmf/common.sh@7 -- # uname -s 00:30:55.173 21:26:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.173 21:26:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.173 21:26:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.173 21:26:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.173 21:26:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.173 21:26:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.173 21:26:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.173 21:26:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.173 21:26:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.173 21:26:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.173 21:26:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:55.173 21:26:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:55.173 21:26:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.173 21:26:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.173 21:26:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.173 21:26:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.173 21:26:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.173 21:26:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.173 21:26:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.173 21:26:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.173 21:26:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.173 21:26:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.173 21:26:32 -- paths/export.sh@5 -- # export PATH 00:30:55.173 21:26:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.173 21:26:32 -- nvmf/common.sh@46 -- # : 0 00:30:55.173 21:26:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:55.173 21:26:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:55.173 21:26:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:55.173 21:26:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.173 21:26:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.173 21:26:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:55.173 21:26:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:55.173 21:26:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:55.173 21:26:32 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:55.173 21:26:32 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:55.173 21:26:32 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:55.173 21:26:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:55.173 21:26:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.173 21:26:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:55.173 21:26:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:55.173 21:26:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:55.173 21:26:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.173 21:26:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.173 21:26:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.173 21:26:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:55.173 21:26:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:55.173 21:26:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:55.173 21:26:32 -- common/autotest_common.sh@10 -- # set +x 00:31:01.764 21:26:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:31:01.764 21:26:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:01.764 21:26:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:01.764 21:26:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:01.764 21:26:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:01.764 21:26:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:01.764 21:26:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:01.764 21:26:39 -- nvmf/common.sh@294 -- # net_devs=() 00:31:01.764 21:26:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:01.764 21:26:39 -- nvmf/common.sh@295 -- # e810=() 00:31:01.764 21:26:39 -- nvmf/common.sh@295 -- # local -ga e810 00:31:01.764 21:26:39 -- nvmf/common.sh@296 -- # x722=() 00:31:01.764 21:26:39 -- nvmf/common.sh@296 -- # local -ga x722 00:31:01.764 21:26:39 -- nvmf/common.sh@297 -- # mlx=() 00:31:01.764 21:26:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:01.764 21:26:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.764 21:26:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:01.764 21:26:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:01.764 21:26:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:01.764 21:26:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.764 21:26:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:01.764 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:01.764 21:26:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.764 21:26:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:01.764 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:01.764 21:26:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
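The scan above walks the cached PCI device list looking for supported NVMe-oF NICs; in this run it keeps the two Intel 0x159b (E810/ice) functions at 0000:4b:00.0 and 0000:4b:00.1 and, just below, maps each one to its kernel net device (cvl_0_0, cvl_0_1). The same discovery can be reproduced outside the harness straight from sysfs; a minimal sketch, where the loop body is ours and not part of nvmf/common.sh:

for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 ]] || continue     # Intel
    [[ $(cat "$pci/device") == 0x159b ]] || continue     # E810 (ice)
    for netdev in "$pci"/net/*; do
        [[ -e $netdev ]] || continue                     # function may be unbound
        echo "Found $(basename "$pci"): $(basename "$netdev")"
    done
done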
00:31:01.764 21:26:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.764 21:26:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.764 21:26:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.764 21:26:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.764 21:26:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:01.764 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:01.764 21:26:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.764 21:26:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.764 21:26:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.764 21:26:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.764 21:26:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.764 21:26:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:01.764 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:01.764 21:26:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.764 21:26:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:01.764 21:26:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:01.764 21:26:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:01.764 21:26:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:01.764 21:26:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.764 21:26:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.764 21:26:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.764 21:26:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:01.764 21:26:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.764 21:26:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.764 21:26:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:01.764 21:26:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.764 21:26:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.764 21:26:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:01.764 21:26:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:01.764 21:26:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.764 21:26:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.764 21:26:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.764 21:26:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.764 21:26:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:01.764 21:26:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.764 21:26:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.764 21:26:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.764 21:26:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:02.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:02.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:31:02.025 00:31:02.025 --- 10.0.0.2 ping statistics --- 00:31:02.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.025 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:31:02.025 21:26:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:02.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:31:02.025 00:31:02.025 --- 10.0.0.1 ping statistics --- 00:31:02.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.025 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:31:02.025 21:26:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.025 21:26:39 -- nvmf/common.sh@410 -- # return 0 00:31:02.025 21:26:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:02.025 21:26:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.025 21:26:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:02.025 21:26:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:02.025 21:26:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.025 21:26:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:02.025 21:26:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:02.025 21:26:39 -- host/bdevperf.sh@25 -- # tgt_init 00:31:02.025 21:26:39 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:02.025 21:26:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:02.025 21:26:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:02.025 21:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:02.025 21:26:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:02.025 21:26:39 -- nvmf/common.sh@469 -- # nvmfpid=2575394 00:31:02.025 21:26:39 -- nvmf/common.sh@470 -- # waitforlisten 2575394 00:31:02.025 21:26:39 -- common/autotest_common.sh@819 -- # '[' -z 2575394 ']' 00:31:02.025 21:26:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.025 21:26:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:02.025 21:26:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.025 21:26:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:02.025 21:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:02.025 [2024-06-08 21:26:39.950100] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:02.025 [2024-06-08 21:26:39.950162] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.025 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.025 [2024-06-08 21:26:40.035828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:02.025 [2024-06-08 21:26:40.103842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:02.025 [2024-06-08 21:26:40.103968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
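The nvmf_tcp_init sequence traced just above splits the two ports across network namespaces: cvl_0_0 becomes the target-side interface inside cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a ping before nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk. Condensed from the trace into a stand-alone sketch (run as root; interface and namespace names are the ones this host reports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns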
00:31:02.025 [2024-06-08 21:26:40.103977] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.025 [2024-06-08 21:26:40.103984] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.025 [2024-06-08 21:26:40.104132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.025 [2024-06-08 21:26:40.104288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.025 [2024-06-08 21:26:40.104289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.964 21:26:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.964 21:26:40 -- common/autotest_common.sh@852 -- # return 0 00:31:02.964 21:26:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:02.964 21:26:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:02.964 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 21:26:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.964 21:26:40 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.964 21:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.964 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 [2024-06-08 21:26:40.772328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.964 21:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.964 21:26:40 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:02.964 21:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.964 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 Malloc0 00:31:02.964 21:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.964 21:26:40 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:02.964 21:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.964 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 21:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.964 21:26:40 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:02.964 21:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.964 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 21:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.964 21:26:40 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.964 21:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.964 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 [2024-06-08 21:26:40.842670] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.964 21:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.964 21:26:40 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:02.964 21:26:40 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:02.964 21:26:40 -- nvmf/common.sh@520 -- # config=() 00:31:02.964 21:26:40 -- nvmf/common.sh@520 -- # local subsystem config 00:31:02.964 21:26:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:02.964 21:26:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:02.964 { 
00:31:02.964 "params": { 00:31:02.964 "name": "Nvme$subsystem", 00:31:02.964 "trtype": "$TEST_TRANSPORT", 00:31:02.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.964 "adrfam": "ipv4", 00:31:02.964 "trsvcid": "$NVMF_PORT", 00:31:02.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.964 "hdgst": ${hdgst:-false}, 00:31:02.964 "ddgst": ${ddgst:-false} 00:31:02.964 }, 00:31:02.964 "method": "bdev_nvme_attach_controller" 00:31:02.964 } 00:31:02.964 EOF 00:31:02.964 )") 00:31:02.964 21:26:40 -- nvmf/common.sh@542 -- # cat 00:31:02.964 21:26:40 -- nvmf/common.sh@544 -- # jq . 00:31:02.964 21:26:40 -- nvmf/common.sh@545 -- # IFS=, 00:31:02.964 21:26:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:02.964 "params": { 00:31:02.964 "name": "Nvme1", 00:31:02.964 "trtype": "tcp", 00:31:02.964 "traddr": "10.0.0.2", 00:31:02.964 "adrfam": "ipv4", 00:31:02.964 "trsvcid": "4420", 00:31:02.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:02.965 "hdgst": false, 00:31:02.965 "ddgst": false 00:31:02.965 }, 00:31:02.965 "method": "bdev_nvme_attach_controller" 00:31:02.965 }' 00:31:02.965 [2024-06-08 21:26:40.893767] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:02.965 [2024-06-08 21:26:40.893813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575664 ] 00:31:02.965 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.965 [2024-06-08 21:26:40.950931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.965 [2024-06-08 21:26:41.013934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.534 Running I/O for 1 seconds... 
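gen_nvmf_target_json above emits a one-entry bdev subsystem config (bdev_nvme_attach_controller against 10.0.0.2:4420 with header and data digests off) and bdevperf consumes it through the /dev/fd/62 process-substitution descriptor. Written out by hand it looks like the sketch below; only the inner method/params entry is printed verbatim in the trace, so the outer subsystems/bdev wrapper here is an assumption based on the standard SPDK --json layout:

# Hand-written equivalent of the generated config (wrapper object assumed):
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1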
00:31:04.477 00:31:04.477 Latency(us) 00:31:04.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.477 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:04.477 Verification LBA range: start 0x0 length 0x4000 00:31:04.477 Nvme1n1 : 1.01 13967.65 54.56 0.00 0.00 9122.73 1351.68 13598.72 00:31:04.477 =================================================================================================================== 00:31:04.477 Total : 13967.65 54.56 0.00 0.00 9122.73 1351.68 13598.72 00:31:04.477 21:26:42 -- host/bdevperf.sh@30 -- # bdevperfpid=2576002 00:31:04.477 21:26:42 -- host/bdevperf.sh@32 -- # sleep 3 00:31:04.477 21:26:42 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:04.477 21:26:42 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:04.477 21:26:42 -- nvmf/common.sh@520 -- # config=() 00:31:04.477 21:26:42 -- nvmf/common.sh@520 -- # local subsystem config 00:31:04.477 21:26:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:04.477 21:26:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:04.477 { 00:31:04.477 "params": { 00:31:04.477 "name": "Nvme$subsystem", 00:31:04.477 "trtype": "$TEST_TRANSPORT", 00:31:04.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.477 "adrfam": "ipv4", 00:31:04.477 "trsvcid": "$NVMF_PORT", 00:31:04.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.477 "hdgst": ${hdgst:-false}, 00:31:04.477 "ddgst": ${ddgst:-false} 00:31:04.477 }, 00:31:04.477 "method": "bdev_nvme_attach_controller" 00:31:04.477 } 00:31:04.477 EOF 00:31:04.477 )") 00:31:04.477 21:26:42 -- nvmf/common.sh@542 -- # cat 00:31:04.477 21:26:42 -- nvmf/common.sh@544 -- # jq . 00:31:04.477 21:26:42 -- nvmf/common.sh@545 -- # IFS=, 00:31:04.477 21:26:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:04.477 "params": { 00:31:04.477 "name": "Nvme1", 00:31:04.477 "trtype": "tcp", 00:31:04.477 "traddr": "10.0.0.2", 00:31:04.477 "adrfam": "ipv4", 00:31:04.477 "trsvcid": "4420", 00:31:04.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:04.477 "hdgst": false, 00:31:04.477 "ddgst": false 00:31:04.477 }, 00:31:04.477 "method": "bdev_nvme_attach_controller" 00:31:04.477 }' 00:31:04.477 [2024-06-08 21:26:42.501810] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:04.477 [2024-06-08 21:26:42.501862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576002 ] 00:31:04.477 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.477 [2024-06-08 21:26:42.560338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.737 [2024-06-08 21:26:42.622794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.998 Running I/O for 15 seconds... 
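The second bdevperf instance (pid 2576002) runs the same verify workload for 15 seconds, and three seconds in the script SIGKILLs the nvmf target (pid 2575394); the READs still in flight on the dead connection are then completed with ABORTED - SQ DELETION in the entries that follow. The orchestration reduces to roughly the sketch below, where bdevperf_bin, nvmfpid and the config path stand in for the harness variables used in this run:

"$bdevperf_bin" --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3                    # let the verify workload ramp up
kill -9 "$nvmfpid"         # yank the nvmf_tgt out from under the host
sleep 3                    # host fails the queued I/O (ABORTED - SQ DELETION below)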
00:31:07.544 21:26:45 -- host/bdevperf.sh@33 -- # kill -9 2575394 00:31:07.544 21:26:45 -- host/bdevperf.sh@35 -- # sleep 3 00:31:07.544 [2024-06-08 21:26:45.470755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.470980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.470991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-06-08 21:26:45.471203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-06-08 21:26:45.471213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:07.545 [2024-06-08 21:26:45.471329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471499] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-06-08 21:26:45.471649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471658] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-06-08 21:26:45.471692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-06-08 21:26:45.471699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.471926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.471984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:07.546 [2024-06-08 21:26:45.471991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 
21:26:45.472153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-06-08 21:26:45.472210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.546 [2024-06-08 21:26:45.472217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.547 [2024-06-08 21:26:45.472789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-06-08 21:26:45.472816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.547 [2024-06-08 21:26:45.472823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.548 [2024-06-08 21:26:45.472887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 
[2024-06-08 21:26:45.472895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.548 [2024-06-08 21:26:45.472987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.472996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16afab0 is same with the state(5) to be set 00:31:07.548 [2024-06-08 21:26:45.473005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-06-08 21:26:45.473013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-06-08 21:26:45.473019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99168 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-06-08 21:26:45.473026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-06-08 21:26:45.473064] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16afab0 was disconnected and freed. reset controller. 
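
Everything from the kill -9 at host/bdevperf.sh@33 down to this point is the host draining qpair 0x16afab0: each in-flight READ/WRITE is completed as ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme schedules a controller reset. The reconnect attempts that follow all fail with connect() errno = 111, i.e. ECONNREFUSED, because nothing is listening on 10.0.0.2:4420 while the killed target is down. Two hypothetical helper commands (not part of the test scripts) that confirm that reading:

# errno 111 on Linux is ECONNREFUSED:
python3 -c 'import errno; print(errno.errorcode[111])'    # prints ECONNREFUSED

# Watch for a listener coming back on the address/port used in this run
# (assumes an nc build that supports -z port probing):
until nc -z 10.0.0.2 4420; do sleep 0.5; done
echo "10.0.0.2:4420 is accepting connections again"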
00:31:07.548 [2024-06-08 21:26:45.475667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.548 [2024-06-08 21:26:45.475714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.548 [2024-06-08 21:26:45.476379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.476715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.476752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.548 [2024-06-08 21:26:45.476763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.548 [2024-06-08 21:26:45.476910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.548 [2024-06-08 21:26:45.477039] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.548 [2024-06-08 21:26:45.477047] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.548 [2024-06-08 21:26:45.477056] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.548 [2024-06-08 21:26:45.479433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.548 [2024-06-08 21:26:45.488224] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.548 [2024-06-08 21:26:45.488897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.489338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.489351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.548 [2024-06-08 21:26:45.489361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.548 [2024-06-08 21:26:45.489531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.548 [2024-06-08 21:26:45.489678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.548 [2024-06-08 21:26:45.489687] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.548 [2024-06-08 21:26:45.489695] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.548 [2024-06-08 21:26:45.491905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.548 [2024-06-08 21:26:45.500844] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.548 [2024-06-08 21:26:45.501421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.501969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.502006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.548 [2024-06-08 21:26:45.502017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.548 [2024-06-08 21:26:45.502199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.548 [2024-06-08 21:26:45.502388] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.548 [2024-06-08 21:26:45.502397] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.548 [2024-06-08 21:26:45.502412] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.548 [2024-06-08 21:26:45.504670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.548 [2024-06-08 21:26:45.513352] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.548 [2024-06-08 21:26:45.514001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.514440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.548 [2024-06-08 21:26:45.514459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.548 [2024-06-08 21:26:45.514468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.548 [2024-06-08 21:26:45.514579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.548 [2024-06-08 21:26:45.514741] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.548 [2024-06-08 21:26:45.514750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.548 [2024-06-08 21:26:45.514756] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.548 [2024-06-08 21:26:45.517004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.548 [2024-06-08 21:26:45.525710] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.526280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.526681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.526717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-06-08 21:26:45.526728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.549 [2024-06-08 21:26:45.526909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.549 [2024-06-08 21:26:45.527001] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-06-08 21:26:45.527010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-06-08 21:26:45.527017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-06-08 21:26:45.529359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.549 [2024-06-08 21:26:45.538302] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.538766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.539231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.539241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-06-08 21:26:45.539249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.549 [2024-06-08 21:26:45.539415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.549 [2024-06-08 21:26:45.539560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-06-08 21:26:45.539572] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-06-08 21:26:45.539579] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-06-08 21:26:45.541817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.549 [2024-06-08 21:26:45.550782] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.551380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.551840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.551850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-06-08 21:26:45.551857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.549 [2024-06-08 21:26:45.552019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.549 [2024-06-08 21:26:45.552144] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-06-08 21:26:45.552152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-06-08 21:26:45.552159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-06-08 21:26:45.554492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.549 [2024-06-08 21:26:45.563109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.563762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.564211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.564223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-06-08 21:26:45.564232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.549 [2024-06-08 21:26:45.564438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.549 [2024-06-08 21:26:45.564586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-06-08 21:26:45.564594] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-06-08 21:26:45.564601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-06-08 21:26:45.566883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.549 [2024-06-08 21:26:45.575609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.576212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.576724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.576761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-06-08 21:26:45.576771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.549 [2024-06-08 21:26:45.576935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.549 [2024-06-08 21:26:45.577063] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-06-08 21:26:45.577071] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-06-08 21:26:45.577083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-06-08 21:26:45.579412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.549 [2024-06-08 21:26:45.588248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.588906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.589334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-06-08 21:26:45.589343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-06-08 21:26:45.589351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.549 [2024-06-08 21:26:45.589464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.549 [2024-06-08 21:26:45.589607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-06-08 21:26:45.589614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-06-08 21:26:45.589621] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-06-08 21:26:45.591766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.549 [2024-06-08 21:26:45.600659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-06-08 21:26:45.601233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.550 [2024-06-08 21:26:45.601748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.550 [2024-06-08 21:26:45.601784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.550 [2024-06-08 21:26:45.601794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.550 [2024-06-08 21:26:45.601939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.550 [2024-06-08 21:26:45.602067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.550 [2024-06-08 21:26:45.602075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.550 [2024-06-08 21:26:45.602083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.550 [2024-06-08 21:26:45.604408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.550 [2024-06-08 21:26:45.613241] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.550 [2024-06-08 21:26:45.613752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.550 [2024-06-08 21:26:45.614177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.550 [2024-06-08 21:26:45.614187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.550 [2024-06-08 21:26:45.614194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.550 [2024-06-08 21:26:45.614356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.550 [2024-06-08 21:26:45.614525] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.550 [2024-06-08 21:26:45.614533] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.550 [2024-06-08 21:26:45.614540] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.550 [2024-06-08 21:26:45.616783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.550 [2024-06-08 21:26:45.625542] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.550 [2024-06-08 21:26:45.626106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.550 [2024-06-08 21:26:45.626640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.550 [2024-06-08 21:26:45.626676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.550 [2024-06-08 21:26:45.626687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.550 [2024-06-08 21:26:45.626851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.550 [2024-06-08 21:26:45.626979] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.550 [2024-06-08 21:26:45.626987] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.550 [2024-06-08 21:26:45.626995] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.550 [2024-06-08 21:26:45.629529] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.812 [2024-06-08 21:26:45.637999] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.812 [2024-06-08 21:26:45.638712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.812 [2024-06-08 21:26:45.639233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.812 [2024-06-08 21:26:45.639246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.812 [2024-06-08 21:26:45.639255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.812 [2024-06-08 21:26:45.639400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.812 [2024-06-08 21:26:45.639517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.812 [2024-06-08 21:26:45.639525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.812 [2024-06-08 21:26:45.639533] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.812 [2024-06-08 21:26:45.641785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.812 [2024-06-08 21:26:45.650506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.812 [2024-06-08 21:26:45.651068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.812 [2024-06-08 21:26:45.651610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.812 [2024-06-08 21:26:45.651646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.812 [2024-06-08 21:26:45.651657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.812 [2024-06-08 21:26:45.651840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.812 [2024-06-08 21:26:45.651986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.812 [2024-06-08 21:26:45.651994] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.812 [2024-06-08 21:26:45.652002] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.812 [2024-06-08 21:26:45.654284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.813 [2024-06-08 21:26:45.662909] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.663485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.663917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.663928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.663935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.664061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.664222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.664232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.664239] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.666596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.813 [2024-06-08 21:26:45.675431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.676033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.676458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.676468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.676476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.676619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.676782] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.676790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.676797] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.678979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.813 [2024-06-08 21:26:45.687898] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.688464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.688889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.688898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.688905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.689085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.689264] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.689273] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.689280] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.691556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.813 [2024-06-08 21:26:45.700411] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.701000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.701456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.701466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.701474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.701635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.701742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.701750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.701757] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.703833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.813 [2024-06-08 21:26:45.713190] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.713783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.714214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.714223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.714230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.714355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.714540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.714549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.714556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.716664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.813 [2024-06-08 21:26:45.725478] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.726053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.726476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.726486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.726493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.726691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.726835] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.726842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.726849] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.729219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.813 [2024-06-08 21:26:45.738070] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.738644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.739078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.739090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.813 [2024-06-08 21:26:45.739097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.813 [2024-06-08 21:26:45.739295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.813 [2024-06-08 21:26:45.739461] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.813 [2024-06-08 21:26:45.739470] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.813 [2024-06-08 21:26:45.739477] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.813 [2024-06-08 21:26:45.741805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.813 [2024-06-08 21:26:45.750476] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.813 [2024-06-08 21:26:45.751022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.813 [2024-06-08 21:26:45.751447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.751456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.751464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.751588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.751731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.751738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.751745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.753966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.814 [2024-06-08 21:26:45.762868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.763421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.763853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.763862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.763869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.764012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.764173] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.764181] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.764188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.766430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.814 [2024-06-08 21:26:45.775539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.776102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.776520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.776541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.776551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.776658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.776818] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.776826] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.776833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.779085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.814 [2024-06-08 21:26:45.788021] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.788688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.789101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.789111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.789118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.789242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.789366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.789373] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.789380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.791692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.814 [2024-06-08 21:26:45.800382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.800984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.801400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.801415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.801422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.801601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.801689] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.801696] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.801703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.803978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.814 [2024-06-08 21:26:45.812943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.813592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.814038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.814051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.814060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.814226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.814373] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.814381] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.814388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.816693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.814 [2024-06-08 21:26:45.825544] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.826028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.826614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.826650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.826661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.826824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.826953] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.826960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.826968] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.814 [2024-06-08 21:26:45.829177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.814 [2024-06-08 21:26:45.837841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.814 [2024-06-08 21:26:45.838494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.838957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.814 [2024-06-08 21:26:45.838969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.814 [2024-06-08 21:26:45.838978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.814 [2024-06-08 21:26:45.839178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.814 [2024-06-08 21:26:45.839343] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.814 [2024-06-08 21:26:45.839357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.814 [2024-06-08 21:26:45.839364] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.815 [2024-06-08 21:26:45.841763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.815 [2024-06-08 21:26:45.850371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.815 [2024-06-08 21:26:45.850987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.851436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.851450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.815 [2024-06-08 21:26:45.851459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.815 [2024-06-08 21:26:45.851622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.815 [2024-06-08 21:26:45.851754] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.815 [2024-06-08 21:26:45.851763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.815 [2024-06-08 21:26:45.851770] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.815 [2024-06-08 21:26:45.854102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.815 [2024-06-08 21:26:45.862837] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.815 [2024-06-08 21:26:45.863435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.863820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.863832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.815 [2024-06-08 21:26:45.863841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.815 [2024-06-08 21:26:45.864022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.815 [2024-06-08 21:26:45.864187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.815 [2024-06-08 21:26:45.864195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.815 [2024-06-08 21:26:45.864202] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.815 [2024-06-08 21:26:45.866468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.815 [2024-06-08 21:26:45.875477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.815 [2024-06-08 21:26:45.876136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.876597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.876610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.815 [2024-06-08 21:26:45.876620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.815 [2024-06-08 21:26:45.876782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.815 [2024-06-08 21:26:45.876928] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.815 [2024-06-08 21:26:45.876936] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.815 [2024-06-08 21:26:45.876943] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.815 [2024-06-08 21:26:45.879206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:07.815 [2024-06-08 21:26:45.888158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.815 [2024-06-08 21:26:45.888751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.889175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.889185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.815 [2024-06-08 21:26:45.889192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.815 [2024-06-08 21:26:45.889354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.815 [2024-06-08 21:26:45.889483] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.815 [2024-06-08 21:26:45.889499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.815 [2024-06-08 21:26:45.889506] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.815 [2024-06-08 21:26:45.891778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.815 [2024-06-08 21:26:45.900642] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.815 [2024-06-08 21:26:45.901343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.901791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.815 [2024-06-08 21:26:45.901804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:07.815 [2024-06-08 21:26:45.901814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:07.815 [2024-06-08 21:26:45.901977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:07.815 [2024-06-08 21:26:45.902142] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.815 [2024-06-08 21:26:45.902150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.815 [2024-06-08 21:26:45.902157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.077 [2024-06-08 21:26:45.904327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.077 [2024-06-08 21:26:45.913196] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.077 [2024-06-08 21:26:45.913637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.077 [2024-06-08 21:26:45.914050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.077 [2024-06-08 21:26:45.914060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.077 [2024-06-08 21:26:45.914067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.077 [2024-06-08 21:26:45.914174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.077 [2024-06-08 21:26:45.914300] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.077 [2024-06-08 21:26:45.914308] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.077 [2024-06-08 21:26:45.914315] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.077 [2024-06-08 21:26:45.916538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.077 [2024-06-08 21:26:45.925765] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.077 [2024-06-08 21:26:45.926416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.077 [2024-06-08 21:26:45.926907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.077 [2024-06-08 21:26:45.926919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.077 [2024-06-08 21:26:45.926929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.077 [2024-06-08 21:26:45.927109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.077 [2024-06-08 21:26:45.927238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.077 [2024-06-08 21:26:45.927246] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.077 [2024-06-08 21:26:45.927259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.077 [2024-06-08 21:26:45.929435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.078 [2024-06-08 21:26:45.938275] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:45.938872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.939320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.939332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:45.939341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:45.939493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:45.939640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:45.939648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:45.939655] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:45.941769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.078 [2024-06-08 21:26:45.950793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:45.951449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.951792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.951804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:45.951813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:45.951994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:45.952177] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:45.952185] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:45.952193] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:45.954446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.078 [2024-06-08 21:26:45.963177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:45.963838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.964191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.964204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:45.964213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:45.964392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:45.964493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:45.964502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:45.964510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:45.966630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.078 [2024-06-08 21:26:45.975868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:45.976506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.977021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.977034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:45.977043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:45.977206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:45.977370] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:45.977378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:45.977385] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:45.979634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.078 [2024-06-08 21:26:45.988331] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:45.988921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.989342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:45.989351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:45.989359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:45.989509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:45.989653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:45.989660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:45.989667] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:45.991848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.078 [2024-06-08 21:26:46.000955] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:46.001607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:46.001966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:46.001980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:46.001989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:46.002171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:46.002317] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:46.002326] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:46.002333] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:46.004509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.078 [2024-06-08 21:26:46.013513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:46.014225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:46.014591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:46.014606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:46.014616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:46.014760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:46.014924] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:46.014932] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:46.014940] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:46.017149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.078 [2024-06-08 21:26:46.026042] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.078 [2024-06-08 21:26:46.026728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:46.027077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.078 [2024-06-08 21:26:46.027089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.078 [2024-06-08 21:26:46.027098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.078 [2024-06-08 21:26:46.027243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.078 [2024-06-08 21:26:46.027415] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.078 [2024-06-08 21:26:46.027425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.078 [2024-06-08 21:26:46.027432] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.078 [2024-06-08 21:26:46.029819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.079 [2024-06-08 21:26:46.038606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.039216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.039670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.039685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.039694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.039875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.040021] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.040030] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.040037] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.079 [2024-06-08 21:26:46.042300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.079 [2024-06-08 21:26:46.051274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.051848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.052274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.052284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.052291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.052439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.052583] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.052591] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.052598] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.079 [2024-06-08 21:26:46.054796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.079 [2024-06-08 21:26:46.063725] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.064418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.064819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.064832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.064841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.065041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.065187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.065195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.065203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.079 [2024-06-08 21:26:46.067430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.079 [2024-06-08 21:26:46.076393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.077066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.077512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.077527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.077536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.077698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.077845] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.077853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.077861] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.079 [2024-06-08 21:26:46.080083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.079 [2024-06-08 21:26:46.088890] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.089671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.090123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.090135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.090145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.090289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.090443] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.090452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.090459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.079 [2024-06-08 21:26:46.092682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.079 [2024-06-08 21:26:46.101368] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.101960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.102412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.102425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.102434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.102615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.102724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.102732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.102739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.079 [2024-06-08 21:26:46.104891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.079 [2024-06-08 21:26:46.113857] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.079 [2024-06-08 21:26:46.114501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.114919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.079 [2024-06-08 21:26:46.114933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.079 [2024-06-08 21:26:46.114942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.079 [2024-06-08 21:26:46.115123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.079 [2024-06-08 21:26:46.115250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.079 [2024-06-08 21:26:46.115259] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.079 [2024-06-08 21:26:46.115266] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.080 [2024-06-08 21:26:46.117719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.080 [2024-06-08 21:26:46.126204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.080 [2024-06-08 21:26:46.126812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.127256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.127269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.080 [2024-06-08 21:26:46.127282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.080 [2024-06-08 21:26:46.127453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.080 [2024-06-08 21:26:46.127619] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.080 [2024-06-08 21:26:46.127627] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.080 [2024-06-08 21:26:46.127634] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.080 [2024-06-08 21:26:46.129878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.080 [2024-06-08 21:26:46.138706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.080 [2024-06-08 21:26:46.139363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.139920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.139933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.080 [2024-06-08 21:26:46.139943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.080 [2024-06-08 21:26:46.140125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.080 [2024-06-08 21:26:46.140253] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.080 [2024-06-08 21:26:46.140261] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.080 [2024-06-08 21:26:46.140269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.080 [2024-06-08 21:26:46.142607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.080 [2024-06-08 21:26:46.151286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.080 [2024-06-08 21:26:46.151955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.152415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.152428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.080 [2024-06-08 21:26:46.152438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.080 [2024-06-08 21:26:46.152656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.080 [2024-06-08 21:26:46.152822] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.080 [2024-06-08 21:26:46.152830] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.080 [2024-06-08 21:26:46.152837] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.080 [2024-06-08 21:26:46.154970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.080 [2024-06-08 21:26:46.163834] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.080 [2024-06-08 21:26:46.164485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.164931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.080 [2024-06-08 21:26:46.164944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.080 [2024-06-08 21:26:46.164953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.080 [2024-06-08 21:26:46.165101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.080 [2024-06-08 21:26:46.165248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.080 [2024-06-08 21:26:46.165256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.080 [2024-06-08 21:26:46.165264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.342 [2024-06-08 21:26:46.167604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.342 [2024-06-08 21:26:46.176186] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.342 [2024-06-08 21:26:46.176846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.177295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.177308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.342 [2024-06-08 21:26:46.177318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.342 [2024-06-08 21:26:46.177488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.342 [2024-06-08 21:26:46.177617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.342 [2024-06-08 21:26:46.177625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.342 [2024-06-08 21:26:46.177632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.342 [2024-06-08 21:26:46.179877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.342 [2024-06-08 21:26:46.188711] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.342 [2024-06-08 21:26:46.189394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.189867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.189879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.342 [2024-06-08 21:26:46.189888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.342 [2024-06-08 21:26:46.190051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.342 [2024-06-08 21:26:46.190198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.342 [2024-06-08 21:26:46.190206] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.342 [2024-06-08 21:26:46.190213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.342 [2024-06-08 21:26:46.192384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.342 [2024-06-08 21:26:46.201129] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.342 [2024-06-08 21:26:46.201798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.202245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.202258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.342 [2024-06-08 21:26:46.202267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.342 [2024-06-08 21:26:46.202438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.342 [2024-06-08 21:26:46.202572] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.342 [2024-06-08 21:26:46.202580] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.342 [2024-06-08 21:26:46.202587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.342 [2024-06-08 21:26:46.204943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.342 [2024-06-08 21:26:46.213848] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.342 [2024-06-08 21:26:46.214501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.215003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-06-08 21:26:46.215016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.215025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.215188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.215316] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.215324] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.215331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.217652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.343 [2024-06-08 21:26:46.226437] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.227089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.227541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.227556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.227565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.227692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.227820] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.227828] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.227836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.230079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.343 [2024-06-08 21:26:46.238808] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.239375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.239929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.239965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.239976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.240159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.240287] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.240300] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.240307] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.242505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.343 [2024-06-08 21:26:46.251093] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.251791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.252189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.252202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.252211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.252393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.252568] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.252577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.252584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.254757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.343 [2024-06-08 21:26:46.263647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.264131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.264708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.264745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.264755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.264936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.265138] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.265147] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.265155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.267398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.343 [2024-06-08 21:26:46.276361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.276968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.277390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.277400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.277414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.277539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.277701] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.277709] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.277720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.279989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.343 [2024-06-08 21:26:46.288537] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.289191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.289738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.289774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.289785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.289948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.290133] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.290142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.290149] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.292394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.343 [2024-06-08 21:26:46.301141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.301777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.302130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.302143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.302151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.343 [2024-06-08 21:26:46.302332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.343 [2024-06-08 21:26:46.302470] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.343 [2024-06-08 21:26:46.302479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.343 [2024-06-08 21:26:46.302486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.343 [2024-06-08 21:26:46.304816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.343 [2024-06-08 21:26:46.313599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.343 [2024-06-08 21:26:46.314262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.314700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.343 [2024-06-08 21:26:46.314714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.343 [2024-06-08 21:26:46.314723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.314885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.315013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.315021] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.315028] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.317258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.344 [2024-06-08 21:26:46.326103] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.326789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.327228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.327241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.327250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.327376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.327512] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.327521] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.327528] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.329861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.344 [2024-06-08 21:26:46.338617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.339204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.339664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.339678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.339688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.339832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.339923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.339931] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.339938] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.342126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.344 [2024-06-08 21:26:46.351149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.351747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.352192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.352205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.352214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.352358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.352495] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.352504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.352511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.354757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.344 [2024-06-08 21:26:46.363446] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.364161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.364768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.364804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.364815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.364960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.365051] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.365058] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.365066] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.367370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.344 [2024-06-08 21:26:46.376050] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.376701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.377147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.377160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.377169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.377332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.377505] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.377514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.377521] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.380057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.344 [2024-06-08 21:26:46.388389] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.388983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.389627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.389663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.389675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.389805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.389952] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.389960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.389967] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.392122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.344 [2024-06-08 21:26:46.400844] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.401557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.402016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.402029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.402038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.402256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.344 [2024-06-08 21:26:46.402429] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.344 [2024-06-08 21:26:46.402439] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.344 [2024-06-08 21:26:46.402446] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.344 [2024-06-08 21:26:46.404650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.344 [2024-06-08 21:26:46.413340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.344 [2024-06-08 21:26:46.413958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.414416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.344 [2024-06-08 21:26:46.414429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.344 [2024-06-08 21:26:46.414438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.344 [2024-06-08 21:26:46.414620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.345 [2024-06-08 21:26:46.414803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.345 [2024-06-08 21:26:46.414817] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.345 [2024-06-08 21:26:46.414824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.345 [2024-06-08 21:26:46.417066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.345 [2024-06-08 21:26:46.425933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.345 [2024-06-08 21:26:46.426481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.345 [2024-06-08 21:26:46.426837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.345 [2024-06-08 21:26:46.426849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.345 [2024-06-08 21:26:46.426858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.345 [2024-06-08 21:26:46.426984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.345 [2024-06-08 21:26:46.427130] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.345 [2024-06-08 21:26:46.427138] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.345 [2024-06-08 21:26:46.427145] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.345 [2024-06-08 21:26:46.429337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.607 [2024-06-08 21:26:46.438286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.607 [2024-06-08 21:26:46.438889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.607 [2024-06-08 21:26:46.439335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.607 [2024-06-08 21:26:46.439348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.607 [2024-06-08 21:26:46.439357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.607 [2024-06-08 21:26:46.439547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.607 [2024-06-08 21:26:46.439658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.607 [2024-06-08 21:26:46.439665] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.607 [2024-06-08 21:26:46.439673] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.607 [2024-06-08 21:26:46.441974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.607 [2024-06-08 21:26:46.450921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.607 [2024-06-08 21:26:46.451659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.607 [2024-06-08 21:26:46.452023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.607 [2024-06-08 21:26:46.452035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.607 [2024-06-08 21:26:46.452045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.452189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.452335] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.452344] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.452351] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.454688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.608 [2024-06-08 21:26:46.463551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.464259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.464725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.464740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.464749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.464912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.465059] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.465067] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.465075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.467318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.608 [2024-06-08 21:26:46.476180] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.476836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.477334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.477347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.477366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.477517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.477666] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.477674] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.477682] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.479980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.608 [2024-06-08 21:26:46.488556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.489134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.489627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.489663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.489674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.489856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.489984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.489992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.490000] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.492253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.608 [2024-06-08 21:26:46.501000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.501684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.502135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.502147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.502157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.502319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.502455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.502464] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.502471] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.504842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.608 [2024-06-08 21:26:46.513382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.513951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.514373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.514382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.514390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.514506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.514632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.514640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.514647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.516872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.608 [2024-06-08 21:26:46.525779] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.526478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.526945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.526958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.526968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.527167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.527314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.527322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.527330] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.529526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.608 [2024-06-08 21:26:46.538411] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.539128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.539583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.539597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.539607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.608 [2024-06-08 21:26:46.539788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.608 [2024-06-08 21:26:46.539935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.608 [2024-06-08 21:26:46.539942] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.608 [2024-06-08 21:26:46.539950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.608 [2024-06-08 21:26:46.542137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.608 [2024-06-08 21:26:46.550889] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.608 [2024-06-08 21:26:46.551504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.551949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.608 [2024-06-08 21:26:46.551961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.608 [2024-06-08 21:26:46.551971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.552115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.552285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.552293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.552300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.554498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.609 [2024-06-08 21:26:46.563546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.564155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.564574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.564584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.564592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.564716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.564841] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.564848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.564855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.567238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.609 [2024-06-08 21:26:46.576059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.576699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.577145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.577158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.577167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.577311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.577447] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.577456] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.577463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.579925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.609 [2024-06-08 21:26:46.588645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.589281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.589636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.589650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.589659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.589785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.589894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.589906] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.589914] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.592215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.609 [2024-06-08 21:26:46.601286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.601931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.602379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.602391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.602400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.602589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.602718] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.602726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.602733] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.605068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.609 [2024-06-08 21:26:46.613739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.614362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.614787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.614797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.614805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.614930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.615110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.615118] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.615124] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.617619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.609 [2024-06-08 21:26:46.626203] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.626838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.627262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.627271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.627278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.627385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.627534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.627542] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.627553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.629900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.609 [2024-06-08 21:26:46.638499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.639129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.639676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.639712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.639723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.639904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.640051] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.609 [2024-06-08 21:26:46.640060] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.609 [2024-06-08 21:26:46.640067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.609 [2024-06-08 21:26:46.642337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.609 [2024-06-08 21:26:46.651225] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.609 [2024-06-08 21:26:46.651943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.652394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.609 [2024-06-08 21:26:46.652414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.609 [2024-06-08 21:26:46.652424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.609 [2024-06-08 21:26:46.652606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.609 [2024-06-08 21:26:46.652771] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.610 [2024-06-08 21:26:46.652779] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.610 [2024-06-08 21:26:46.652786] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.610 [2024-06-08 21:26:46.655081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.610 [2024-06-08 21:26:46.663653] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.610 [2024-06-08 21:26:46.664338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.610 [2024-06-08 21:26:46.664852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.610 [2024-06-08 21:26:46.664866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.610 [2024-06-08 21:26:46.664876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.610 [2024-06-08 21:26:46.665039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.610 [2024-06-08 21:26:46.665204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.610 [2024-06-08 21:26:46.665212] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.610 [2024-06-08 21:26:46.665223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.610 [2024-06-08 21:26:46.667450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.610 [2024-06-08 21:26:46.675954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.610 [2024-06-08 21:26:46.676412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.610 [2024-06-08 21:26:46.676924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.610 [2024-06-08 21:26:46.676961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.610 [2024-06-08 21:26:46.676971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.610 [2024-06-08 21:26:46.677152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.610 [2024-06-08 21:26:46.677300] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.610 [2024-06-08 21:26:46.677308] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.610 [2024-06-08 21:26:46.677316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.610 [2024-06-08 21:26:46.679427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.610 [2024-06-08 21:26:46.688516] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.610 [2024-06-08 21:26:46.689229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.610 [2024-06-08 21:26:46.689691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.610 [2024-06-08 21:26:46.689706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.610 [2024-06-08 21:26:46.689715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.610 [2024-06-08 21:26:46.689896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.610 [2024-06-08 21:26:46.690061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.610 [2024-06-08 21:26:46.690069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.610 [2024-06-08 21:26:46.690077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.610 [2024-06-08 21:26:46.692393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.873 [2024-06-08 21:26:46.701031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.701686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.702135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.702147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.702156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.702282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.702472] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.702480] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.702488] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.704673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.873 [2024-06-08 21:26:46.713512] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.714087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.714619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.714657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.714667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.714884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.715069] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.715077] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.715084] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.717367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.873 [2024-06-08 21:26:46.725977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.726688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.727046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.727060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.727069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.727213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.727323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.727330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.727337] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.729533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.873 [2024-06-08 21:26:46.738316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.738891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.739306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.739315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.739323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.739471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.739634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.739641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.739648] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.741994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.873 [2024-06-08 21:26:46.750702] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.751273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.751801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.751837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.751848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.752029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.752157] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.752165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.752172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.754495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.873 [2024-06-08 21:26:46.763288] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.763947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.764413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.764427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.764436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.764562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.764708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.764716] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.764724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.766966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.873 [2024-06-08 21:26:46.776007] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.776696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.777132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.777145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.777154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.873 [2024-06-08 21:26:46.777317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.873 [2024-06-08 21:26:46.777488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.873 [2024-06-08 21:26:46.777497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.873 [2024-06-08 21:26:46.777505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.873 [2024-06-08 21:26:46.779674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.873 [2024-06-08 21:26:46.788468] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.873 [2024-06-08 21:26:46.789003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.789572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.873 [2024-06-08 21:26:46.789609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.873 [2024-06-08 21:26:46.789619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.789763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.789911] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.789919] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.789927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.792046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.874 [2024-06-08 21:26:46.800866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.801506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.801961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.801974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.801983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.802128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.802256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.802264] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.802271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.804612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.874 [2024-06-08 21:26:46.813064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.813721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.814175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.814187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.814196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.814359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.814513] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.814522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.814529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.816904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.874 [2024-06-08 21:26:46.825439] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.826077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.826528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.826542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.826556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.826701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.826887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.826895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.826902] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.829035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.874 [2024-06-08 21:26:46.837825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.838449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.838906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.838917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.838925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.839109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.839235] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.839243] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.839250] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.841494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.874 [2024-06-08 21:26:46.850403] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.851021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.851441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.851459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.851467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.851578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.851666] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.851673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.851681] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.853999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.874 [2024-06-08 21:26:46.862800] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.863422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.863847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.863856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.863863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.864012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.864155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.864163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.864170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.866258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
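Reading off the nvme_ctrlr_disconnect timestamps, the reset attempts recur roughly every 12 to 13 ms (for example 21:26:46.837825, 21:26:46.850403 and 21:26:46.862800 just above). A small sketch for pulling that cadence out of a saved copy of this console log is given below; the file name is hypothetical, and the regular expression keys only on the "resetting controller" notices that repeat throughout this section.

    import re
    from datetime import datetime

    # Hypothetical file name; point this at a saved copy of the console log.
    LOG_PATH = "nvmf-tcp-phy-autotest.log"

    # Timestamp of each nvme_ctrlr_disconnect "resetting controller" notice.
    RESET_RE = re.compile(
        r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\] nvme_ctrlr\.c:\d+:nvme_ctrlr_disconnect:")

    def reset_intervals(path):
        """Gaps, in milliseconds, between consecutive controller reset attempts."""
        with open(path) as f:
            text = f.read()
        stamps = [datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
                  for m in RESET_RE.finditer(text)]
        return [(b - a).total_seconds() * 1000.0 for a, b in zip(stamps, stamps[1:])]

    if __name__ == "__main__":
        for ms in reset_intervals(LOG_PATH):
            print(f"{ms:.1f} ms")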
00:31:08.874 [2024-06-08 21:26:46.875320] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.875986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.876481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.876496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.876505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.876631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.874 [2024-06-08 21:26:46.876758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.874 [2024-06-08 21:26:46.876767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.874 [2024-06-08 21:26:46.876775] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.874 [2024-06-08 21:26:46.879077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.874 [2024-06-08 21:26:46.888025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.874 [2024-06-08 21:26:46.888747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.889196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.874 [2024-06-08 21:26:46.889208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.874 [2024-06-08 21:26:46.889217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.874 [2024-06-08 21:26:46.889399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.875 [2024-06-08 21:26:46.889533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.875 [2024-06-08 21:26:46.889541] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.875 [2024-06-08 21:26:46.889548] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.875 [2024-06-08 21:26:46.891750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.875 [2024-06-08 21:26:46.900290] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.875 [2024-06-08 21:26:46.900928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.901351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.901361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.875 [2024-06-08 21:26:46.901369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.875 [2024-06-08 21:26:46.901576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.875 [2024-06-08 21:26:46.901739] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.875 [2024-06-08 21:26:46.901749] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.875 [2024-06-08 21:26:46.901755] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.875 [2024-06-08 21:26:46.904124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.875 [2024-06-08 21:26:46.912667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.875 [2024-06-08 21:26:46.913263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.913781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.913818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.875 [2024-06-08 21:26:46.913829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.875 [2024-06-08 21:26:46.913992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.875 [2024-06-08 21:26:46.914102] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.875 [2024-06-08 21:26:46.914110] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.875 [2024-06-08 21:26:46.914117] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.875 [2024-06-08 21:26:46.916443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.875 [2024-06-08 21:26:46.925228] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.875 [2024-06-08 21:26:46.925797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.926221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.926230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.875 [2024-06-08 21:26:46.926238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.875 [2024-06-08 21:26:46.926381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.875 [2024-06-08 21:26:46.926566] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.875 [2024-06-08 21:26:46.926575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.875 [2024-06-08 21:26:46.926582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.875 [2024-06-08 21:26:46.928649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.875 [2024-06-08 21:26:46.937762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.875 [2024-06-08 21:26:46.938360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.938916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.938953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.875 [2024-06-08 21:26:46.938963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.875 [2024-06-08 21:26:46.939163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.875 [2024-06-08 21:26:46.939334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.875 [2024-06-08 21:26:46.939343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.875 [2024-06-08 21:26:46.939351] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.875 [2024-06-08 21:26:46.941601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.875 [2024-06-08 21:26:46.950311] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.875 [2024-06-08 21:26:46.950875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.951296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.875 [2024-06-08 21:26:46.951305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:08.875 [2024-06-08 21:26:46.951313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:08.875 [2024-06-08 21:26:46.951498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:08.875 [2024-06-08 21:26:46.951605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:08.875 [2024-06-08 21:26:46.951613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:08.875 [2024-06-08 21:26:46.951619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.875 [2024-06-08 21:26:46.953838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.138 [2024-06-08 21:26:46.963062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.138 [2024-06-08 21:26:46.963688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:46.964193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:46.964206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.138 [2024-06-08 21:26:46.964216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.138 [2024-06-08 21:26:46.964378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.138 [2024-06-08 21:26:46.964549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.138 [2024-06-08 21:26:46.964558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.138 [2024-06-08 21:26:46.964565] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.138 [2024-06-08 21:26:46.966843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.138 [2024-06-08 21:26:46.975305] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.138 [2024-06-08 21:26:46.975897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:46.976313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:46.976323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.138 [2024-06-08 21:26:46.976331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.138 [2024-06-08 21:26:46.976459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.138 [2024-06-08 21:26:46.976658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.138 [2024-06-08 21:26:46.976666] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.138 [2024-06-08 21:26:46.976677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.138 [2024-06-08 21:26:46.978861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.138 [2024-06-08 21:26:46.987970] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.138 [2024-06-08 21:26:46.989040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:46.989572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:46.989609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.138 [2024-06-08 21:26:46.989621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.138 [2024-06-08 21:26:46.989731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.138 [2024-06-08 21:26:46.989822] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.138 [2024-06-08 21:26:46.989830] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.138 [2024-06-08 21:26:46.989837] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.138 [2024-06-08 21:26:46.992238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.138 [2024-06-08 21:26:47.000479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.138 [2024-06-08 21:26:47.001044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:47.001574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:47.001611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.138 [2024-06-08 21:26:47.001622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.138 [2024-06-08 21:26:47.001803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.138 [2024-06-08 21:26:47.001968] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.138 [2024-06-08 21:26:47.001977] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.138 [2024-06-08 21:26:47.001984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.138 [2024-06-08 21:26:47.004048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.138 [2024-06-08 21:26:47.012956] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.138 [2024-06-08 21:26:47.013646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:47.013984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.138 [2024-06-08 21:26:47.013997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.138 [2024-06-08 21:26:47.014006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.138 [2024-06-08 21:26:47.014150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.138 [2024-06-08 21:26:47.014296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.138 [2024-06-08 21:26:47.014304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.138 [2024-06-08 21:26:47.014316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.016549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.139 [2024-06-08 21:26:47.025723] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.026296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.026744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.026755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.026762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.026979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.027123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.027132] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.027138] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.029336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.139 [2024-06-08 21:26:47.038128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.038714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.039134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.039143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.039150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.039349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.039495] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.039504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.039511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.041653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.139 [2024-06-08 21:26:47.050501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.051168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.051638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.051653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.051662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.051825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.051972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.051980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.051987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.054269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.139 [2024-06-08 21:26:47.063080] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.063605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.064024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.064033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.064040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.064147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.064327] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.064335] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.064341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.066546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.139 [2024-06-08 21:26:47.075575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.076187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.076694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.076731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.076741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.076923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.077069] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.077077] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.077085] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.079366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.139 [2024-06-08 21:26:47.088041] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.088410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.088768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.088779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.088787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.088915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.089041] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.089049] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.089056] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.091388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.139 [2024-06-08 21:26:47.100584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.101271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.101813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.139 [2024-06-08 21:26:47.101827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.139 [2024-06-08 21:26:47.101836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.139 [2024-06-08 21:26:47.101999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.139 [2024-06-08 21:26:47.102127] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.139 [2024-06-08 21:26:47.102135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.139 [2024-06-08 21:26:47.102142] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.139 [2024-06-08 21:26:47.104202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.139 [2024-06-08 21:26:47.113006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.139 [2024-06-08 21:26:47.113630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.114049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.114058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.114066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.114191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.114298] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.114306] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.114312] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.116570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.140 [2024-06-08 21:26:47.125525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.126122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.126541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.126551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.126559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.126720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.126845] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.126853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.126860] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.129041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.140 [2024-06-08 21:26:47.137859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.138665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.139120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.139133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.139142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.139323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.139477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.139487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.139494] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.141701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.140 [2024-06-08 21:26:47.150386] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.151074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.151519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.151534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.151543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.151742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.151852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.151861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.151868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.154072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.140 [2024-06-08 21:26:47.162783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.163347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.163783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.163793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.163800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.163925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.164106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.164115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.164122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.166267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.140 [2024-06-08 21:26:47.175367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.176020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.176473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.176488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.176501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.176645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.176829] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.176837] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.176844] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.179018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.140 [2024-06-08 21:26:47.187959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.188653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.188972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.188985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.188994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.189156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.140 [2024-06-08 21:26:47.189285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.140 [2024-06-08 21:26:47.189293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.140 [2024-06-08 21:26:47.189301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.140 [2024-06-08 21:26:47.191694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.140 [2024-06-08 21:26:47.200455] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.140 [2024-06-08 21:26:47.201020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.201437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.140 [2024-06-08 21:26:47.201447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.140 [2024-06-08 21:26:47.201455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.140 [2024-06-08 21:26:47.201598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.141 [2024-06-08 21:26:47.201723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.141 [2024-06-08 21:26:47.201731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.141 [2024-06-08 21:26:47.201738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.141 [2024-06-08 21:26:47.203995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.141 [2024-06-08 21:26:47.212777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.141 [2024-06-08 21:26:47.213459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.141 [2024-06-08 21:26:47.213921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.141 [2024-06-08 21:26:47.213933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.141 [2024-06-08 21:26:47.213947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.141 [2024-06-08 21:26:47.214091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.141 [2024-06-08 21:26:47.214219] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.141 [2024-06-08 21:26:47.214227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.141 [2024-06-08 21:26:47.214234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.141 [2024-06-08 21:26:47.216704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.141 [2024-06-08 21:26:47.225353] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.141 [2024-06-08 21:26:47.225931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.141 [2024-06-08 21:26:47.226268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.141 [2024-06-08 21:26:47.226282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.141 [2024-06-08 21:26:47.226291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.141 [2024-06-08 21:26:47.226498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.141 [2024-06-08 21:26:47.226664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.141 [2024-06-08 21:26:47.226672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.141 [2024-06-08 21:26:47.226679] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.403 [2024-06-08 21:26:47.228956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.403 [2024-06-08 21:26:47.237796] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.403 [2024-06-08 21:26:47.238375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.238880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.238891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.403 [2024-06-08 21:26:47.238898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.403 [2024-06-08 21:26:47.239042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.403 [2024-06-08 21:26:47.239222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.403 [2024-06-08 21:26:47.239230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.403 [2024-06-08 21:26:47.239237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.403 [2024-06-08 21:26:47.241550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.403 [2024-06-08 21:26:47.250330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.403 [2024-06-08 21:26:47.250940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.251371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.251380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.403 [2024-06-08 21:26:47.251388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.403 [2024-06-08 21:26:47.251522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.403 [2024-06-08 21:26:47.251686] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.403 [2024-06-08 21:26:47.251693] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.403 [2024-06-08 21:26:47.251700] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.403 [2024-06-08 21:26:47.253933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.403 [2024-06-08 21:26:47.262874] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.403 [2024-06-08 21:26:47.263329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.263899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.263936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.403 [2024-06-08 21:26:47.263947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.403 [2024-06-08 21:26:47.264110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.403 [2024-06-08 21:26:47.264276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.403 [2024-06-08 21:26:47.264284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.403 [2024-06-08 21:26:47.264291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.403 [2024-06-08 21:26:47.266483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.403 [2024-06-08 21:26:47.275291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.403 [2024-06-08 21:26:47.275973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.276407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.276422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.403 [2024-06-08 21:26:47.276431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.403 [2024-06-08 21:26:47.276575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.403 [2024-06-08 21:26:47.276704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.403 [2024-06-08 21:26:47.276712] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.403 [2024-06-08 21:26:47.276719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.403 [2024-06-08 21:26:47.279040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.403 [2024-06-08 21:26:47.287747] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.403 [2024-06-08 21:26:47.288319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.288788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.403 [2024-06-08 21:26:47.288825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.403 [2024-06-08 21:26:47.288835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.403 [2024-06-08 21:26:47.288980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.289112] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.289121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.289128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.291335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.404 [2024-06-08 21:26:47.300252] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.300898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.301331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.301341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.301349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.301481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.301645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.301653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.301660] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.303788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.404 [2024-06-08 21:26:47.312687] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.313380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.313793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.313807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.313816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.313978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.314106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.314114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.314121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.316294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.404 [2024-06-08 21:26:47.325125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.325750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.326087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.326096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.326104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.326229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.326354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.326367] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.326374] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.328559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.404 [2024-06-08 21:26:47.337701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.338239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.338780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.338817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.338828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.338972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.339100] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.339108] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.339117] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.341330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.404 [2024-06-08 21:26:47.350228] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.350776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.351234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.351247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.351256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.351400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.351536] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.351544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.351551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.353756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.404 [2024-06-08 21:26:47.362618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.363237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.363829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.363844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.363854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.363979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.364163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.364171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.364183] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.366337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.404 [2024-06-08 21:26:47.375176] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.375780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.376114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.376123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.376131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.376275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.376381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.404 [2024-06-08 21:26:47.376389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.404 [2024-06-08 21:26:47.376395] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.404 [2024-06-08 21:26:47.378691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.404 [2024-06-08 21:26:47.387693] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.404 [2024-06-08 21:26:47.388369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.388777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.404 [2024-06-08 21:26:47.388790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.404 [2024-06-08 21:26:47.388800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.404 [2024-06-08 21:26:47.389000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.404 [2024-06-08 21:26:47.389128] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.389137] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.389144] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.391555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.405 [2024-06-08 21:26:47.400249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.400677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.401064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.401074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.401082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.401228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.401414] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.401423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.401430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.403594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.405 [2024-06-08 21:26:47.412688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.413476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.414005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.414018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.414027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.414171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.414299] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.414307] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.414314] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.416544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.405 [2024-06-08 21:26:47.424913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.425481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.425939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.425948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.425956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.426118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.426281] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.426288] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.426295] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.428646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.405 [2024-06-08 21:26:47.437366] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.437981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.438429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.438439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.438447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.438552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.438678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.438685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.438692] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.440950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.405 [2024-06-08 21:26:47.449854] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.450506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.451015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.451028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.451037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.451182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.451346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.451355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.451362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.453628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.405 [2024-06-08 21:26:47.462423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.463089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.463525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.463535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.463542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.463724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.463867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.463875] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.463881] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.466168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.405 [2024-06-08 21:26:47.474732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.475188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.475630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.475640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.475648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.475808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.475933] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.475940] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.405 [2024-06-08 21:26:47.475947] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.405 [2024-06-08 21:26:47.478328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.405 [2024-06-08 21:26:47.487081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.405 [2024-06-08 21:26:47.487640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.488073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.405 [2024-06-08 21:26:47.488083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.405 [2024-06-08 21:26:47.488091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.405 [2024-06-08 21:26:47.488215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.405 [2024-06-08 21:26:47.488358] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.405 [2024-06-08 21:26:47.488367] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.406 [2024-06-08 21:26:47.488373] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.406 [2024-06-08 21:26:47.490521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.667 [2024-06-08 21:26:47.499484] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.667 [2024-06-08 21:26:47.500066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.500482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.500492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.667 [2024-06-08 21:26:47.500499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.667 [2024-06-08 21:26:47.500678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.667 [2024-06-08 21:26:47.500840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.667 [2024-06-08 21:26:47.500848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.667 [2024-06-08 21:26:47.500856] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.667 [2024-06-08 21:26:47.503090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.667 [2024-06-08 21:26:47.512131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.667 [2024-06-08 21:26:47.512751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.513188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.513197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.667 [2024-06-08 21:26:47.513205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.667 [2024-06-08 21:26:47.513347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.667 [2024-06-08 21:26:47.513476] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.667 [2024-06-08 21:26:47.513484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.667 [2024-06-08 21:26:47.513491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.667 [2024-06-08 21:26:47.515961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.667 [2024-06-08 21:26:47.524616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.667 [2024-06-08 21:26:47.525266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.525740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.525759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.667 [2024-06-08 21:26:47.525769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.667 [2024-06-08 21:26:47.525969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.667 [2024-06-08 21:26:47.526153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.667 [2024-06-08 21:26:47.526162] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.667 [2024-06-08 21:26:47.526169] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.667 [2024-06-08 21:26:47.528416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.667 [2024-06-08 21:26:47.537018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.667 [2024-06-08 21:26:47.537471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.537920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.537929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.667 [2024-06-08 21:26:47.537937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.667 [2024-06-08 21:26:47.538044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.667 [2024-06-08 21:26:47.538223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.667 [2024-06-08 21:26:47.538231] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.667 [2024-06-08 21:26:47.538238] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.667 [2024-06-08 21:26:47.540661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.667 [2024-06-08 21:26:47.549598] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.667 [2024-06-08 21:26:47.550182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.550482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.667 [2024-06-08 21:26:47.550494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.550502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.550664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.550844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.550853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.550859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.553297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.562177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.562805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.563251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.563263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.563281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.563470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.563635] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.563644] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.563651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.565800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.574794] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.575448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.575952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.575964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.575974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.576137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.576284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.576292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.576300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.578456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.587212] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.587930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.588376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.588389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.588398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.588551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.588698] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.588706] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.588713] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.590740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.599839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.600501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.600973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.600986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.600995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.601163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.601310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.601317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.601325] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.603373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.612195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.612917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.613355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.613368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.613377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.613549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.613732] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.613741] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.613748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.615861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.624694] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.625294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.625821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.625858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.625868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.626088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.626216] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.626225] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.626233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.628652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.637208] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.637921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.638374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.638387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.638396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.638567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.638719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.638727] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.638734] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.641126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.649567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.650109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.650618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.650655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.650666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.650848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.650958] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.650966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.650973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.653262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.662016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.662667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.663115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.663128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.663137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.663355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.663527] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.663537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.663544] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.665713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.674697] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.675303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.675803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.675817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.675826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.675989] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.676135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.676148] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.676156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.678509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.687090] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.687750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.688189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.688201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.688211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.688336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.688528] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.688537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.688545] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.690568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.699661] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.700339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.700784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.700798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.700807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.700951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.701116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.701124] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.701132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.703358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.668 [2024-06-08 21:26:47.712010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.712702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.713154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.713166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.668 [2024-06-08 21:26:47.713175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.668 [2024-06-08 21:26:47.713338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.668 [2024-06-08 21:26:47.713493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.668 [2024-06-08 21:26:47.713502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.668 [2024-06-08 21:26:47.713514] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.668 [2024-06-08 21:26:47.715923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.668 [2024-06-08 21:26:47.724463] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.668 [2024-06-08 21:26:47.725147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.668 [2024-06-08 21:26:47.725410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.669 [2024-06-08 21:26:47.725434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.669 [2024-06-08 21:26:47.725445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.669 [2024-06-08 21:26:47.725572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.669 [2024-06-08 21:26:47.725738] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.669 [2024-06-08 21:26:47.725746] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.669 [2024-06-08 21:26:47.725754] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.669 [2024-06-08 21:26:47.727998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.669 [2024-06-08 21:26:47.736925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.669 [2024-06-08 21:26:47.737586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.669 [2024-06-08 21:26:47.738055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.669 [2024-06-08 21:26:47.738067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.669 [2024-06-08 21:26:47.738076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.669 [2024-06-08 21:26:47.738276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.669 [2024-06-08 21:26:47.738385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.669 [2024-06-08 21:26:47.738394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.669 [2024-06-08 21:26:47.738410] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.669 [2024-06-08 21:26:47.740546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.669 [2024-06-08 21:26:47.749218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.669 [2024-06-08 21:26:47.749838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.669 [2024-06-08 21:26:47.750289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.669 [2024-06-08 21:26:47.750301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.669 [2024-06-08 21:26:47.750310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.669 [2024-06-08 21:26:47.750479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.669 [2024-06-08 21:26:47.750645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.669 [2024-06-08 21:26:47.750653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.669 [2024-06-08 21:26:47.750660] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.669 [2024-06-08 21:26:47.752819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.930 [2024-06-08 21:26:47.761732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.930 [2024-06-08 21:26:47.762331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.930 [2024-06-08 21:26:47.762921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.930 [2024-06-08 21:26:47.762958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.930 [2024-06-08 21:26:47.762969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.930 [2024-06-08 21:26:47.763150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.930 [2024-06-08 21:26:47.763297] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.930 [2024-06-08 21:26:47.763305] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.763312] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.765585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.931 [2024-06-08 21:26:47.774392] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.775057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.775638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.775674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.775686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.775869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.775997] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.776005] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.776012] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.778203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.931 [2024-06-08 21:26:47.786833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.787489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.787960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.787972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.787982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.788125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.788290] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.788298] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.788305] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.790387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.931 [2024-06-08 21:26:47.799136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.799703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.800149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.800161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.800170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.800297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.800388] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.800396] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.800410] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.802652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.931 [2024-06-08 21:26:47.811258] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.811840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.812290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.812303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.812312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.812465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.812613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.812621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.812628] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.814872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.931 [2024-06-08 21:26:47.823737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.824427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.824848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.824861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.824871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.825052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.825200] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.825209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.825216] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.827594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.931 [2024-06-08 21:26:47.836225] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.836860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.837306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.837319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.837328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.837461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.837608] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.837616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.837623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.839938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.931 [2024-06-08 21:26:47.848623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.849188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.849641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.849656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.849665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.849865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.850011] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.850019] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.850027] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.852308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.931 [2024-06-08 21:26:47.860997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.861609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.862052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.862065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.862074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.862274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.862429] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.862438] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.862445] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.864706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.931 [2024-06-08 21:26:47.873431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.874083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.874530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.874549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.874558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.874721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.874867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.874876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.931 [2024-06-08 21:26:47.874883] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.931 [2024-06-08 21:26:47.877127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.931 [2024-06-08 21:26:47.885876] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.931 [2024-06-08 21:26:47.886584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.887052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.931 [2024-06-08 21:26:47.887065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.931 [2024-06-08 21:26:47.887074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.931 [2024-06-08 21:26:47.887255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.931 [2024-06-08 21:26:47.887448] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.931 [2024-06-08 21:26:47.887457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.887464] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.889489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.932 [2024-06-08 21:26:47.898194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.898870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.899317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.899330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.899340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.899511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.899678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.899686] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.899693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.901859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.932 [2024-06-08 21:26:47.910850] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.911505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.911954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.911967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.911980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.912125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.912290] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.912299] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.912306] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.914521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.932 [2024-06-08 21:26:47.923711] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.924343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.924707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.924721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.924730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.924877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.925061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.925069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.925076] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.927305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.932 [2024-06-08 21:26:47.936135] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.936799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.937244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.937257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.937266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.937391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.937511] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.937520] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.937527] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.939733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.932 [2024-06-08 21:26:47.948769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.949415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.949927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.949940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.949949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.950097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.950299] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.950307] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.950314] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.952488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.932 [2024-06-08 21:26:47.961178] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.961856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.962196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.962208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.962217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.962361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.962480] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.962489] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.962496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.964591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.932 [2024-06-08 21:26:47.973682] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.974346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.974799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.974813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.974823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.974948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.975095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.975103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.975110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.977300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.932 [2024-06-08 21:26:47.986256] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.986967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.987415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.987429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.987439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.987620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.987827] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.932 [2024-06-08 21:26:47.987835] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.932 [2024-06-08 21:26:47.987842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.932 [2024-06-08 21:26:47.990121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:09.932 [2024-06-08 21:26:47.998619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.932 [2024-06-08 21:26:47.999175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.999518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.932 [2024-06-08 21:26:47.999529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.932 [2024-06-08 21:26:47.999536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.932 [2024-06-08 21:26:47.999680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.932 [2024-06-08 21:26:47.999824] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.933 [2024-06-08 21:26:47.999832] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.933 [2024-06-08 21:26:47.999839] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.933 [2024-06-08 21:26:48.002112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:09.933 [2024-06-08 21:26:48.011105] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:09.933 [2024-06-08 21:26:48.011742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.933 [2024-06-08 21:26:48.012188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.933 [2024-06-08 21:26:48.012201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:09.933 [2024-06-08 21:26:48.012210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:09.933 [2024-06-08 21:26:48.012418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:09.933 [2024-06-08 21:26:48.012621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:09.933 [2024-06-08 21:26:48.012630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:09.933 [2024-06-08 21:26:48.012637] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:09.933 [2024-06-08 21:26:48.014973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.195 [2024-06-08 21:26:48.023568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.195 [2024-06-08 21:26:48.024231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.024680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.024694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.195 [2024-06-08 21:26:48.024703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.195 [2024-06-08 21:26:48.024829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.195 [2024-06-08 21:26:48.024993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.195 [2024-06-08 21:26:48.025006] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.195 [2024-06-08 21:26:48.025013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.195 [2024-06-08 21:26:48.027202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.195 [2024-06-08 21:26:48.035997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.195 [2024-06-08 21:26:48.036694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.037140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.037153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.195 [2024-06-08 21:26:48.037162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.195 [2024-06-08 21:26:48.037343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.195 [2024-06-08 21:26:48.037516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.195 [2024-06-08 21:26:48.037525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.195 [2024-06-08 21:26:48.037532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.195 [2024-06-08 21:26:48.039681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.195 [2024-06-08 21:26:48.048628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.195 [2024-06-08 21:26:48.049327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.049787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.049801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.195 [2024-06-08 21:26:48.049810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.195 [2024-06-08 21:26:48.049954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.195 [2024-06-08 21:26:48.050082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.195 [2024-06-08 21:26:48.050090] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.195 [2024-06-08 21:26:48.050098] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.195 [2024-06-08 21:26:48.052288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.195 [2024-06-08 21:26:48.060904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.195 [2024-06-08 21:26:48.061595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.062046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.062059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.195 [2024-06-08 21:26:48.062068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.195 [2024-06-08 21:26:48.062194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.195 [2024-06-08 21:26:48.062323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.195 [2024-06-08 21:26:48.062331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.195 [2024-06-08 21:26:48.062342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.195 [2024-06-08 21:26:48.064538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.195 [2024-06-08 21:26:48.073304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.195 [2024-06-08 21:26:48.073976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.074423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.195 [2024-06-08 21:26:48.074437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.195 [2024-06-08 21:26:48.074446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.074609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.074737] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.074745] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.074752] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.077254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.196 [2024-06-08 21:26:48.085816] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.086508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.086858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.086871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.086880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.087043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.087189] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.087197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.087204] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.089544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.196 [2024-06-08 21:26:48.098483] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.099182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.099554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.099569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.099578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.099722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.099832] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.099840] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.099847] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.102202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.196 [2024-06-08 21:26:48.111103] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.111697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.112181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.112193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.112203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.112384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.112540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.112549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.112556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.114813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.196 [2024-06-08 21:26:48.123497] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.124181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.124719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.124733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.124742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.124904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.125032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.125040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.125047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.127328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.196 [2024-06-08 21:26:48.135935] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.136545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.136964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.136974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.136982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.137143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.137269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.137276] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.137283] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.139282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.196 [2024-06-08 21:26:48.148365] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.149025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.149469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.149483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.149492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.149636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.149745] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.149753] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.149761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.152064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.196 [2024-06-08 21:26:48.161016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.161694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.162140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.162152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.162161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.162305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.196 [2024-06-08 21:26:48.162442] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.196 [2024-06-08 21:26:48.162450] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.196 [2024-06-08 21:26:48.162458] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.196 [2024-06-08 21:26:48.164831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.196 [2024-06-08 21:26:48.173383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.196 [2024-06-08 21:26:48.173932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.174379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.196 [2024-06-08 21:26:48.174392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.196 [2024-06-08 21:26:48.174409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.196 [2024-06-08 21:26:48.174573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.174757] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.174773] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.174781] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.177169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.197 [2024-06-08 21:26:48.185951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.186637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.187084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.187097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.187106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.187288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.187424] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.187433] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.187440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.189538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.197 [2024-06-08 21:26:48.198325] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.198906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.199322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.199332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.199339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.199506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.199669] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.199676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.199683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.201919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.197 [2024-06-08 21:26:48.210822] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.211427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.211943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.211956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.211965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.212164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.212311] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.212319] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.212327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.214631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.197 [2024-06-08 21:26:48.223200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.223717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.224163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.224182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.224191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.224354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.224455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.224464] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.224471] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.226826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.197 [2024-06-08 21:26:48.235562] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.236208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.236745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.236781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.236792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.236936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.237120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.237128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.237135] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.239566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.197 [2024-06-08 21:26:48.248279] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.248936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.249363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.249373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.249380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.249531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.249694] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.249702] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.249709] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.252018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.197 [2024-06-08 21:26:48.261061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.261763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.262209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.262222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.262235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.262343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.262496] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.197 [2024-06-08 21:26:48.262505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.197 [2024-06-08 21:26:48.262512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.197 [2024-06-08 21:26:48.264790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.197 [2024-06-08 21:26:48.273632] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.197 [2024-06-08 21:26:48.274297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.274750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.197 [2024-06-08 21:26:48.274764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.197 [2024-06-08 21:26:48.274774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.197 [2024-06-08 21:26:48.274937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.197 [2024-06-08 21:26:48.275065] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.198 [2024-06-08 21:26:48.275073] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.198 [2024-06-08 21:26:48.275080] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.198 [2024-06-08 21:26:48.277287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.461 [2024-06-08 21:26:48.286169] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.286762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.287280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.287293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.287302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.287511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.287658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.287666] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.287674] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.289840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.461 [2024-06-08 21:26:48.298541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.299268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.299742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.299756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.299765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.299988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.300117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.300133] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.300141] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.302369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.461 [2024-06-08 21:26:48.311145] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.311807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.312253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.312265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.312274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.312427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.312556] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.312564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.312571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.314794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.461 [2024-06-08 21:26:48.323726] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.324380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.324853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.324866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.324875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.325038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.325204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.325212] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.325219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.327574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.461 [2024-06-08 21:26:48.336485] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.337131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.337464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.337478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.337487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.337649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.337800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.337808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.337815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.340204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.461 [2024-06-08 21:26:48.348767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.349449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.349896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.349908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.349917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.350080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.350263] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.350271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.350279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.352608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.461 [2024-06-08 21:26:48.361275] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.361952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.362399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.362421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.362430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.362611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.362758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.362766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.461 [2024-06-08 21:26:48.362774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.461 [2024-06-08 21:26:48.365053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.461 [2024-06-08 21:26:48.373663] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.461 [2024-06-08 21:26:48.374348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.374864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.461 [2024-06-08 21:26:48.374878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.461 [2024-06-08 21:26:48.374887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.461 [2024-06-08 21:26:48.375070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.461 [2024-06-08 21:26:48.375235] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.461 [2024-06-08 21:26:48.375247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.375255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.377595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.462 [2024-06-08 21:26:48.386180] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.386877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.387331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.387343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.387353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.387525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.387617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.387625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.387632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.389951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.462 [2024-06-08 21:26:48.398735] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.399330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.399810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.399821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.399828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.399953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.400113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.400121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.400128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.402307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.462 [2024-06-08 21:26:48.411237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.411927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.412376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.412388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.412397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.412588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.412753] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.412761] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.412773] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.415051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.462 [2024-06-08 21:26:48.423683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.424286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.424739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.424749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.424757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.424938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.425082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.425089] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.425096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.427350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.462 [2024-06-08 21:26:48.436288] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.436860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.437350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.437359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.437366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.437533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.437696] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.437704] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.437711] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.439982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.462 [2024-06-08 21:26:48.448753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.449354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.449768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.449778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.449785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.449947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.450090] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.450097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.450105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.452471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.462 [2024-06-08 21:26:48.461207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.461863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.462310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.462322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.462331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.462 [2024-06-08 21:26:48.462485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.462 [2024-06-08 21:26:48.462632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.462 [2024-06-08 21:26:48.462640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.462 [2024-06-08 21:26:48.462647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.462 [2024-06-08 21:26:48.464870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
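The repeated failures above all follow the same pattern: errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the target is down, so each controller reset attempt ends with "Resetting controller failed" and bdev_nvme schedules another retry. A quick way to confirm from the initiator side whether the listener is back is a plain TCP probe; the helper below is only an illustrative sketch (it is not part of the test scripts) and assumes bash's /dev/tcp redirection and coreutils timeout are available:

probe_listener() {
    # Try to open a TCP connection to the NVMe-oF listener; errno 111 in the
    # log corresponds to this connect() being refused.
    local addr=$1 port=$2
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "listener up on ${addr}:${port}"
    else
        echo "no listener on ${addr}:${port} (connection refused or timed out)"
    fi
}
probe_listener 10.0.0.2 4420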
00:31:10.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2575394 Killed "${NVMF_APP[@]}" "$@" 00:31:10.462 21:26:48 -- host/bdevperf.sh@36 -- # tgt_init 00:31:10.462 21:26:48 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:10.462 21:26:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:10.462 21:26:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:10.462 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:10.462 [2024-06-08 21:26:48.473627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.462 [2024-06-08 21:26:48.474353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.474871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.462 [2024-06-08 21:26:48.474884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.462 [2024-06-08 21:26:48.474894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.475075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.475222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.475230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.475237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.463 21:26:48 -- nvmf/common.sh@469 -- # nvmfpid=2577031 00:31:10.463 21:26:48 -- nvmf/common.sh@470 -- # waitforlisten 2577031 00:31:10.463 21:26:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:10.463 21:26:48 -- common/autotest_common.sh@819 -- # '[' -z 2577031 ']' 00:31:10.463 21:26:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.463 [2024-06-08 21:26:48.477538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.463 21:26:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:10.463 21:26:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:10.463 21:26:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:10.463 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:10.463 [2024-06-08 21:26:48.486115] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.463 [2024-06-08 21:26:48.486689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.487180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.487193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.463 [2024-06-08 21:26:48.487202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.487365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.487520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.487530] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.487538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.463 [2024-06-08 21:26:48.489966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.463 [2024-06-08 21:26:48.498688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.463 [2024-06-08 21:26:48.499384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.499743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.499756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.463 [2024-06-08 21:26:48.499765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.499891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.500000] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.500008] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.500016] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.463 [2024-06-08 21:26:48.502471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
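Interleaved with the reconnect errors, the xtrace above shows bdevperf.sh killing the old target and re-running tgt_init: nvmfappstart restarts nvmf_tgt (pid 2577031) inside the cvl_0_0_ns_spdk namespace and then calls waitforlisten to block until the RPC socket at /var/tmp/spdk.sock answers. A simplified stand-in for that wait loop looks roughly like the sketch below; the real helper lives in the SPDK test common scripts, so the details here (rpc.py path, polling interval) are assumptions for illustration only:

wait_for_rpc() {
    # Poll until the freshly started nvmf_tgt answers on its RPC socket,
    # or give up after max_retries attempts (mirrors the traced defaults).
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed path
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1        # target process exited
        if [ -S "$rpc_addr" ] && "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                  # RPC server is listening
        fi
        sleep 0.5
    done
    return 1
}
# e.g. wait_for_rpc 2577031 /var/tmp/spdk.sock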
00:31:10.463 [2024-06-08 21:26:48.511136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.463 [2024-06-08 21:26:48.511767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.512219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.512232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.463 [2024-06-08 21:26:48.512241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.512367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.512540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.512549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.512557] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.463 [2024-06-08 21:26:48.514853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.463 [2024-06-08 21:26:48.523684] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.463 [2024-06-08 21:26:48.523743] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:10.463 [2024-06-08 21:26:48.523794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.463 [2024-06-08 21:26:48.524327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.524804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.524821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.463 [2024-06-08 21:26:48.524831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.525014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.525125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.525135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.525143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.463 [2024-06-08 21:26:48.527183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.463 [2024-06-08 21:26:48.536229] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.463 [2024-06-08 21:26:48.536703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.537133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.537143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.463 [2024-06-08 21:26:48.537151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.537277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.537408] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.537416] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.537423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.463 [2024-06-08 21:26:48.539679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.463 [2024-06-08 21:26:48.548712] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.463 [2024-06-08 21:26:48.549282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.549725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.463 [2024-06-08 21:26:48.549736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.463 [2024-06-08 21:26:48.549743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.463 [2024-06-08 21:26:48.549849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.463 [2024-06-08 21:26:48.549973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.463 [2024-06-08 21:26:48.549981] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.463 [2024-06-08 21:26:48.549988] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.726 [2024-06-08 21:26:48.552279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.726 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.726 [2024-06-08 21:26:48.561140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.726 [2024-06-08 21:26:48.561706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-06-08 21:26:48.562136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-06-08 21:26:48.562146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.726 [2024-06-08 21:26:48.562153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.726 [2024-06-08 21:26:48.562296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.726 [2024-06-08 21:26:48.562482] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.726 [2024-06-08 21:26:48.562490] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.726 [2024-06-08 21:26:48.562497] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.726 [2024-06-08 21:26:48.564642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.726 [2024-06-08 21:26:48.573892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.726 [2024-06-08 21:26:48.574634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-06-08 21:26:48.575086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-06-08 21:26:48.575099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.726 [2024-06-08 21:26:48.575108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.726 [2024-06-08 21:26:48.575290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.726 [2024-06-08 21:26:48.575462] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.726 [2024-06-08 21:26:48.575471] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.726 [2024-06-08 21:26:48.575478] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.726 [2024-06-08 21:26:48.577721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.726 [2024-06-08 21:26:48.586387] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.726 [2024-06-08 21:26:48.587050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-06-08 21:26:48.587645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-06-08 21:26:48.587682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.726 [2024-06-08 21:26:48.587693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.587894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.588040] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.588048] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.588056] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.590157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.727 [2024-06-08 21:26:48.598936] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.599379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.599920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.599957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.599967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.600093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.600241] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.600249] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.600257] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.602708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.727 [2024-06-08 21:26:48.608222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:10.727 [2024-06-08 21:26:48.611371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.611970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.612246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.612256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.612264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.612414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.612540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.612549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.612556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.614791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.727 [2024-06-08 21:26:48.623928] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.624628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.625083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.625096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.625106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.625214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.625379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.625388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.625397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.627635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.727 [2024-06-08 21:26:48.636375] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.637023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.637632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.637671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.637684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.637834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.637982] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.637990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.637998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.640105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.727 [2024-06-08 21:26:48.648866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.649425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.649933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.649943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.649951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.650094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.650256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.650265] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.650272] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.652501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.727 [2024-06-08 21:26:48.660463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:10.727 [2024-06-08 21:26:48.660552] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.727 [2024-06-08 21:26:48.660558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.727 [2024-06-08 21:26:48.660563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:10.727 [2024-06-08 21:26:48.660606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.727 [2024-06-08 21:26:48.660762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.727 [2024-06-08 21:26:48.660763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.727 [2024-06-08 21:26:48.661441] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.661998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.662428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.662441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.662448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.662555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.662682] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.727 [2024-06-08 21:26:48.662696] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.727 [2024-06-08 21:26:48.662703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.727 [2024-06-08 21:26:48.664978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.727 [2024-06-08 21:26:48.673839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.727 [2024-06-08 21:26:48.674492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.674944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-06-08 21:26:48.674953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.727 [2024-06-08 21:26:48.674961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.727 [2024-06-08 21:26:48.675050] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.727 [2024-06-08 21:26:48.675174] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.675182] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.675189] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.677242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.728 [2024-06-08 21:26:48.686424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.687023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.687457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.687467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.687475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.687636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.687797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.687805] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.687812] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.690014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.728 [2024-06-08 21:26:48.698832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.699441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.699880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.699889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.699896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.700059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.700165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.700173] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.700184] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.702481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.728 [2024-06-08 21:26:48.711442] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.712145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.712602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.712616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.712626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.712777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.712925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.712933] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.712941] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.715058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.728 [2024-06-08 21:26:48.724020] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.724640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.725076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.725086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.725094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.725237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.725344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.725352] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.725359] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.727435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.728 [2024-06-08 21:26:48.736580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.737159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.737463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.737473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.737480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.737623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.737784] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.737792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.737798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.740005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.728 [2024-06-08 21:26:48.748990] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.749340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.749799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.749810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.749817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.750015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.750176] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.750183] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.750190] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.752488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.728 [2024-06-08 21:26:48.761621] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.762234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.762675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.762685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.762692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.762835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.762959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.762967] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.728 [2024-06-08 21:26:48.762974] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.728 [2024-06-08 21:26:48.765177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.728 [2024-06-08 21:26:48.774100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.728 [2024-06-08 21:26:48.774753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.775210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-06-08 21:26:48.775223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.728 [2024-06-08 21:26:48.775232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.728 [2024-06-08 21:26:48.775398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.728 [2024-06-08 21:26:48.775498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.728 [2024-06-08 21:26:48.775506] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.729 [2024-06-08 21:26:48.775514] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.729 [2024-06-08 21:26:48.777872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.729 [2024-06-08 21:26:48.786501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.729 [2024-06-08 21:26:48.787194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.729 [2024-06-08 21:26:48.787662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.729 [2024-06-08 21:26:48.787677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.729 [2024-06-08 21:26:48.787687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.729 [2024-06-08 21:26:48.787812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.729 [2024-06-08 21:26:48.787922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.729 [2024-06-08 21:26:48.787930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.729 [2024-06-08 21:26:48.787937] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.729 [2024-06-08 21:26:48.790369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.729 [2024-06-08 21:26:48.798943] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.729 [2024-06-08 21:26:48.799502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.729 [2024-06-08 21:26:48.799969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.729 [2024-06-08 21:26:48.799982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.729 [2024-06-08 21:26:48.799991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.729 [2024-06-08 21:26:48.800172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.729 [2024-06-08 21:26:48.800318] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.729 [2024-06-08 21:26:48.800327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.729 [2024-06-08 21:26:48.800334] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.729 [2024-06-08 21:26:48.802765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.729 [2024-06-08 21:26:48.811301] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.729 [2024-06-08 21:26:48.811902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.729 [2024-06-08 21:26:48.812451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.729 [2024-06-08 21:26:48.812465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.729 [2024-06-08 21:26:48.812474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.729 [2024-06-08 21:26:48.812655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.729 [2024-06-08 21:26:48.812802] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.729 [2024-06-08 21:26:48.812810] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.729 [2024-06-08 21:26:48.812817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.729 [2024-06-08 21:26:48.815154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.991 [2024-06-08 21:26:48.823779] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.991 [2024-06-08 21:26:48.824319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.991 [2024-06-08 21:26:48.824776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.991 [2024-06-08 21:26:48.824813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.991 [2024-06-08 21:26:48.824823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.991 [2024-06-08 21:26:48.825023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.991 [2024-06-08 21:26:48.825189] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.991 [2024-06-08 21:26:48.825198] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.991 [2024-06-08 21:26:48.825206] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.991 [2024-06-08 21:26:48.827379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.991 [2024-06-08 21:26:48.836518] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.991 [2024-06-08 21:26:48.837141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.991 [2024-06-08 21:26:48.837671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.837707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.837718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.837900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.838028] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.838036] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.838044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.840124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.992 [2024-06-08 21:26:48.848881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.849268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.849667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.849703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.849715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.849880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.849952] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.849960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.849968] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.852274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.992 [2024-06-08 21:26:48.861333] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.861720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.861931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.861940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.861948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.862148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.862312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.862319] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.862326] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.864551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.992 [2024-06-08 21:26:48.873799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.874469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.874969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.874982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.874991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.875136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.875283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.875291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.875298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.877675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.992 [2024-06-08 21:26:48.886336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.886918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.887422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.887433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.887440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.887601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.887708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.887716] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.887723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.890033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.992 [2024-06-08 21:26:48.898924] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.899413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.899926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.899936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.899947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.900054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.900215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.900224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.900231] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.902579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.992 [2024-06-08 21:26:48.911488] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.911953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.912382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.912392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.912399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.912585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.912748] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.912755] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.912762] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.914941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.992 [2024-06-08 21:26:48.923982] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.924643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.925103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.992 [2024-06-08 21:26:48.925116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.992 [2024-06-08 21:26:48.925125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.992 [2024-06-08 21:26:48.925269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.992 [2024-06-08 21:26:48.925423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.992 [2024-06-08 21:26:48.925432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.992 [2024-06-08 21:26:48.925439] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.992 [2024-06-08 21:26:48.927664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.992 [2024-06-08 21:26:48.936258] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.992 [2024-06-08 21:26:48.936887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.937313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.937322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:48.937330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:48.937464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:48.937664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:48.937672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:48.937679] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:48.939735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.993 [2024-06-08 21:26:48.948962] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:48.949675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.950139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.950153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:48.950163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:48.950326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:48.950497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:48.950506] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:48.950513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:48.952570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.993 [2024-06-08 21:26:48.961482] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:48.962093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.962633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.962669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:48.962680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:48.962825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:48.962953] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:48.962962] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:48.962969] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:48.965255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.993 [2024-06-08 21:26:48.974085] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:48.974806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.975267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.975279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:48.975288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:48.975457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:48.975664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:48.975673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:48.975680] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:48.978067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.993 [2024-06-08 21:26:48.986435] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:48.987011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.987307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:48.987317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:48.987324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:48.987453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:48.987578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:48.987586] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:48.987593] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:48.989993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.993 [2024-06-08 21:26:48.998989] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:48.999683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.000138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.000151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:49.000160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:49.000323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:49.000494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:49.000504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:49.000511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:49.002879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.993 [2024-06-08 21:26:49.011505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:49.012123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.012366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.012376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:49.012384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:49.012513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:49.012676] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:49.012688] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:49.012695] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:49.014894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.993 [2024-06-08 21:26:49.024099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:49.024743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.025201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.025214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.993 [2024-06-08 21:26:49.025223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.993 [2024-06-08 21:26:49.025367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.993 [2024-06-08 21:26:49.025500] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.993 [2024-06-08 21:26:49.025509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.993 [2024-06-08 21:26:49.025516] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.993 [2024-06-08 21:26:49.027800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.993 [2024-06-08 21:26:49.036546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.993 [2024-06-08 21:26:49.037187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.993 [2024-06-08 21:26:49.037647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.037662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.994 [2024-06-08 21:26:49.037671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.994 [2024-06-08 21:26:49.037834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.994 [2024-06-08 21:26:49.037999] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.994 [2024-06-08 21:26:49.038007] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.994 [2024-06-08 21:26:49.038014] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.994 [2024-06-08 21:26:49.040314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.994 [2024-06-08 21:26:49.048945] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.994 [2024-06-08 21:26:49.049665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.049908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.049921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.994 [2024-06-08 21:26:49.049931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.994 [2024-06-08 21:26:49.050132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.994 [2024-06-08 21:26:49.050260] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.994 [2024-06-08 21:26:49.050268] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.994 [2024-06-08 21:26:49.050280] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.994 [2024-06-08 21:26:49.052583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:10.994 [2024-06-08 21:26:49.061598] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.994 [2024-06-08 21:26:49.062117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.062583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.062598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.994 [2024-06-08 21:26:49.062608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.994 [2024-06-08 21:26:49.062789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.994 [2024-06-08 21:26:49.062936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.994 [2024-06-08 21:26:49.062944] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.994 [2024-06-08 21:26:49.062951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.994 [2024-06-08 21:26:49.065030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.994 [2024-06-08 21:26:49.073922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:10.994 [2024-06-08 21:26:49.074649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.075119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.994 [2024-06-08 21:26:49.075131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:10.994 [2024-06-08 21:26:49.075141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:10.994 [2024-06-08 21:26:49.075323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:10.994 [2024-06-08 21:26:49.075531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:10.994 [2024-06-08 21:26:49.075540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:10.994 [2024-06-08 21:26:49.075547] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:10.994 [2024-06-08 21:26:49.077828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.256 [2024-06-08 21:26:49.086390] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.256 [2024-06-08 21:26:49.086966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.256 [2024-06-08 21:26:49.087405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.256 [2024-06-08 21:26:49.087416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.256 [2024-06-08 21:26:49.087423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.256 [2024-06-08 21:26:49.087602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.256 [2024-06-08 21:26:49.087746] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.256 [2024-06-08 21:26:49.087754] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.256 [2024-06-08 21:26:49.087761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.256 [2024-06-08 21:26:49.090002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.256 [2024-06-08 21:26:49.099170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.256 [2024-06-08 21:26:49.099763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.256 [2024-06-08 21:26:49.100200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.256 [2024-06-08 21:26:49.100210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.256 [2024-06-08 21:26:49.100217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.256 [2024-06-08 21:26:49.100378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.256 [2024-06-08 21:26:49.100506] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.256 [2024-06-08 21:26:49.100514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.256 [2024-06-08 21:26:49.100521] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.256 [2024-06-08 21:26:49.102737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.256 [2024-06-08 21:26:49.111692] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.256 [2024-06-08 21:26:49.112325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.256 [2024-06-08 21:26:49.112952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.256 [2024-06-08 21:26:49.112989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.256 [2024-06-08 21:26:49.113000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.256 [2024-06-08 21:26:49.113144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.256 [2024-06-08 21:26:49.113254] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.256 [2024-06-08 21:26:49.113262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.256 [2024-06-08 21:26:49.113269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.115535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.257 [2024-06-08 21:26:49.124269] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.124913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.125358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.125368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.125377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.125507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.125634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.125641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.125648] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.127994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.257 [2024-06-08 21:26:49.136760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.137225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.137643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.137680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.137691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.137835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.137983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.137991] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.137998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.140004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.257 [2024-06-08 21:26:49.149112] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.149782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.150113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.150126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.150135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.150279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.150415] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.150423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.150430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.152836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.257 [2024-06-08 21:26:49.161592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.161947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.162184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.162194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.162202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.162365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.162496] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.162505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.162511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.164618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.257 [2024-06-08 21:26:49.174224] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.174818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.175037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.175047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.175054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.175197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.175340] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.175348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.175355] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.177599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.257 [2024-06-08 21:26:49.186744] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.187320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.187750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.187761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.187768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.187965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.188072] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.188079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.188086] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.190212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.257 [2024-06-08 21:26:49.199517] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.200197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.200660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.200674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.200683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.200864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.201010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.201018] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.201025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.257 [2024-06-08 21:26:49.203283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.257 [2024-06-08 21:26:49.211877] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.257 [2024-06-08 21:26:49.212347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.212696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.257 [2024-06-08 21:26:49.212707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.257 [2024-06-08 21:26:49.212715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.257 [2024-06-08 21:26:49.212858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.257 [2024-06-08 21:26:49.212983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.257 [2024-06-08 21:26:49.212991] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.257 [2024-06-08 21:26:49.212998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.215305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.258 [2024-06-08 21:26:49.224440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 [2024-06-08 21:26:49.225016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.225486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.225497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.225504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.225665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.225808] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.225816] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.225823] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.228020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.258 [2024-06-08 21:26:49.236868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 [2024-06-08 21:26:49.237448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.237882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.237891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.237899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.238040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.238165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.238172] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.238179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.240418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.258 [2024-06-08 21:26:49.249679] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 [2024-06-08 21:26:49.250244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.250723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.250734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.250745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.250907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.251031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.251039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.251045] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.253373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.258 [2024-06-08 21:26:49.262389] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 [2024-06-08 21:26:49.262873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.263170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.263180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.263187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.263293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.263422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.263430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.263436] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.265598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.258 21:26:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:11.258 21:26:49 -- common/autotest_common.sh@852 -- # return 0 00:31:11.258 21:26:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:11.258 [2024-06-08 21:26:49.275036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 21:26:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:11.258 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:11.258 [2024-06-08 21:26:49.275501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.275954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.275963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.275970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.276058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.276220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.276228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.276234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.278473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.258 [2024-06-08 21:26:49.287558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 [2024-06-08 21:26:49.288152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.288597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.288608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.288616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.288759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.288902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.288909] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.288916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.291335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.258 [2024-06-08 21:26:49.300063] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 [2024-06-08 21:26:49.300645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.301072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.301082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.258 [2024-06-08 21:26:49.301091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.258 [2024-06-08 21:26:49.301253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.258 [2024-06-08 21:26:49.301378] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.258 [2024-06-08 21:26:49.301386] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.258 [2024-06-08 21:26:49.301393] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.258 [2024-06-08 21:26:49.303594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.258 21:26:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.258 [2024-06-08 21:26:49.312418] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.258 21:26:49 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:11.258 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.258 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:11.258 [2024-06-08 21:26:49.312803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.313242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.258 [2024-06-08 21:26:49.313251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.259 [2024-06-08 21:26:49.313259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.259 [2024-06-08 21:26:49.313425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.259 [2024-06-08 21:26:49.313532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.259 [2024-06-08 21:26:49.313540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.259 [2024-06-08 21:26:49.313546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.259 [2024-06-08 21:26:49.315746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.259 [2024-06-08 21:26:49.319361] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.259 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.259 21:26:49 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:11.259 [2024-06-08 21:26:49.324912] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.259 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.259 [2024-06-08 21:26:49.325252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.259 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:11.259 [2024-06-08 21:26:49.325813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.259 [2024-06-08 21:26:49.325849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.259 [2024-06-08 21:26:49.325860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.259 [2024-06-08 21:26:49.326006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.259 [2024-06-08 21:26:49.326134] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.259 [2024-06-08 21:26:49.326142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.259 [2024-06-08 21:26:49.326150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:11.259 [2024-06-08 21:26:49.328475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.259 [2024-06-08 21:26:49.337590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.259 [2024-06-08 21:26:49.338220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.259 [2024-06-08 21:26:49.338774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.259 [2024-06-08 21:26:49.338811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.259 [2024-06-08 21:26:49.338822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.259 [2024-06-08 21:26:49.339021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.259 [2024-06-08 21:26:49.339167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.259 [2024-06-08 21:26:49.339176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.259 [2024-06-08 21:26:49.339183] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.259 [2024-06-08 21:26:49.341448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.519 [2024-06-08 21:26:49.349937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.519 [2024-06-08 21:26:49.350370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.519 [2024-06-08 21:26:49.350685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.519 [2024-06-08 21:26:49.350698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.519 [2024-06-08 21:26:49.350706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.520 [2024-06-08 21:26:49.350888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.520 [2024-06-08 21:26:49.351031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.520 [2024-06-08 21:26:49.351039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.520 [2024-06-08 21:26:49.351046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.520 [2024-06-08 21:26:49.353259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.520 Malloc0 00:31:11.520 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.520 21:26:49 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:11.520 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.520 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:11.520 [2024-06-08 21:26:49.362372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.520 [2024-06-08 21:26:49.362787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-08 21:26:49.363224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-08 21:26:49.363234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.520 [2024-06-08 21:26:49.363242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.520 [2024-06-08 21:26:49.363370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.520 [2024-06-08 21:26:49.363519] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.520 [2024-06-08 21:26:49.363527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.520 [2024-06-08 21:26:49.363534] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.520 [2024-06-08 21:26:49.365863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.520 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.520 21:26:49 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:11.520 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.520 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:11.520 [2024-06-08 21:26:49.374715] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.520 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.520 [2024-06-08 21:26:49.375261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 21:26:49 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.520 [2024-06-08 21:26:49.375743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-08 21:26:49.375753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16909c0 with addr=10.0.0.2, port=4420 00:31:11.520 [2024-06-08 21:26:49.375761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16909c0 is same with the state(5) to be set 00:31:11.520 21:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.520 [2024-06-08 21:26:49.375885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16909c0 (9): Bad file descriptor 00:31:11.520 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:11.520 [2024-06-08 21:26:49.376084] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:11.520 [2024-06-08 21:26:49.376093] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:11.520 [2024-06-08 21:26:49.376099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.520 [2024-06-08 21:26:49.378297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.520 [2024-06-08 21:26:49.382359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.520 [2024-06-08 21:26:49.387111] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.520 21:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.520 21:26:49 -- host/bdevperf.sh@38 -- # wait 2576002 00:31:11.520 [2024-06-08 21:26:49.459209] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
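In the trace above the target is configured through rpc_cmd while the host side keeps retrying its connection; once nvmf_subsystem_add_listener brings up 10.0.0.2:4420, the log flips to "Resetting controller successful". A sketch of the same bring-up using SPDK's scripts/rpc.py directly, under the assumption that rpc_cmd is a thin wrapper around it talking to the default /var/tmp/spdk.sock (the NQN, serial, malloc size, address and port are taken from the trace):

    # Target bring-up equivalent to the rpc_cmd calls traced above.
    rpc=./scripts/rpc.py    # path is an assumption; adjust to your checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420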
00:31:21.567 00:31:21.567 Latency(us) 00:31:21.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.567 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:21.567 Verification LBA range: start 0x0 length 0x4000 00:31:21.567 Nvme1n1 : 15.00 14231.20 55.59 14599.69 0.00 4424.80 1071.79 17913.17 00:31:21.567 =================================================================================================================== 00:31:21.567 Total : 14231.20 55.59 14599.69 0.00 4424.80 1071.79 17913.17 00:31:21.567 21:26:58 -- host/bdevperf.sh@39 -- # sync 00:31:21.567 21:26:58 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.567 21:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.567 21:26:58 -- common/autotest_common.sh@10 -- # set +x 00:31:21.567 21:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.567 21:26:58 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:21.567 21:26:58 -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:21.567 21:26:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:21.567 21:26:58 -- nvmf/common.sh@116 -- # sync 00:31:21.567 21:26:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:21.567 21:26:58 -- nvmf/common.sh@119 -- # set +e 00:31:21.567 21:26:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:21.567 21:26:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:21.567 rmmod nvme_tcp 00:31:21.567 rmmod nvme_fabrics 00:31:21.567 rmmod nvme_keyring 00:31:21.567 21:26:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:21.567 21:26:58 -- nvmf/common.sh@123 -- # set -e 00:31:21.567 21:26:58 -- nvmf/common.sh@124 -- # return 0 00:31:21.567 21:26:58 -- nvmf/common.sh@477 -- # '[' -n 2577031 ']' 00:31:21.567 21:26:58 -- nvmf/common.sh@478 -- # killprocess 2577031 00:31:21.567 21:26:58 -- common/autotest_common.sh@926 -- # '[' -z 2577031 ']' 00:31:21.567 21:26:58 -- common/autotest_common.sh@930 -- # kill -0 2577031 00:31:21.567 21:26:58 -- common/autotest_common.sh@931 -- # uname 00:31:21.567 21:26:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:21.567 21:26:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2577031 00:31:21.567 21:26:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:21.567 21:26:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:21.567 21:26:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2577031' 00:31:21.567 killing process with pid 2577031 00:31:21.567 21:26:58 -- common/autotest_common.sh@945 -- # kill 2577031 00:31:21.567 21:26:58 -- common/autotest_common.sh@950 -- # wait 2577031 00:31:21.567 21:26:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:21.567 21:26:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:21.567 21:26:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:21.567 21:26:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.567 21:26:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:21.567 21:26:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.567 21:26:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.567 21:26:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.510 21:27:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:22.510 00:31:22.510 real 0m27.659s 00:31:22.510 user 1m3.251s 00:31:22.510 sys 0m6.818s 00:31:22.510 21:27:00 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.510 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:31:22.510 ************************************ 00:31:22.510 END TEST nvmf_bdevperf 00:31:22.510 ************************************ 00:31:22.510 21:27:00 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:22.510 21:27:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:22.510 21:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:22.510 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:31:22.510 ************************************ 00:31:22.510 START TEST nvmf_target_disconnect 00:31:22.510 ************************************ 00:31:22.510 21:27:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:22.510 * Looking for test storage... 00:31:22.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.510 21:27:00 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.510 21:27:00 -- nvmf/common.sh@7 -- # uname -s 00:31:22.510 21:27:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.510 21:27:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.510 21:27:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.510 21:27:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.510 21:27:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.510 21:27:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.510 21:27:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.510 21:27:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.510 21:27:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.510 21:27:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.510 21:27:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:22.510 21:27:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:22.510 21:27:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.510 21:27:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.510 21:27:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.510 21:27:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.510 21:27:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.510 21:27:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.510 21:27:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.510 21:27:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.511 21:27:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.511 21:27:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.511 21:27:00 -- paths/export.sh@5 -- # export PATH 00:31:22.511 21:27:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.511 21:27:00 -- nvmf/common.sh@46 -- # : 0 00:31:22.511 21:27:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:22.511 21:27:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:22.511 21:27:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:22.511 21:27:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.511 21:27:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.511 21:27:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:22.511 21:27:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:22.511 21:27:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:22.511 21:27:00 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:22.511 21:27:00 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:22.511 21:27:00 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:22.511 21:27:00 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:22.511 21:27:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:22.511 21:27:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.511 21:27:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:22.511 21:27:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:22.511 21:27:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:22.511 21:27:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.511 21:27:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.511 21:27:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.511 21:27:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:22.511 21:27:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:22.511 21:27:00 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:31:22.511 21:27:00 -- common/autotest_common.sh@10 -- # set +x 00:31:30.665 21:27:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:30.665 21:27:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:30.665 21:27:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:30.665 21:27:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:30.665 21:27:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:30.665 21:27:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:30.665 21:27:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:30.665 21:27:07 -- nvmf/common.sh@294 -- # net_devs=() 00:31:30.665 21:27:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:30.665 21:27:07 -- nvmf/common.sh@295 -- # e810=() 00:31:30.665 21:27:07 -- nvmf/common.sh@295 -- # local -ga e810 00:31:30.665 21:27:07 -- nvmf/common.sh@296 -- # x722=() 00:31:30.665 21:27:07 -- nvmf/common.sh@296 -- # local -ga x722 00:31:30.665 21:27:07 -- nvmf/common.sh@297 -- # mlx=() 00:31:30.665 21:27:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:30.665 21:27:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.665 21:27:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:30.665 21:27:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:30.665 21:27:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:30.665 21:27:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:30.665 21:27:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:30.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:30.665 21:27:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:30.665 21:27:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:30.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:30.665 21:27:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:30.665 21:27:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:30.665 21:27:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.665 21:27:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:30.665 21:27:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.665 21:27:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:30.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:30.665 21:27:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.665 21:27:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:30.665 21:27:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.665 21:27:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:30.665 21:27:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.665 21:27:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:30.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:30.665 21:27:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.665 21:27:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:30.665 21:27:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:30.665 21:27:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:30.665 21:27:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.665 21:27:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.665 21:27:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.665 21:27:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:30.665 21:27:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.665 21:27:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.665 21:27:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:30.665 21:27:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.665 21:27:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.665 21:27:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:30.665 21:27:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:30.665 21:27:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.665 21:27:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.665 21:27:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.665 21:27:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.665 21:27:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:30.665 21:27:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.665 21:27:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.665 21:27:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.665 21:27:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:30.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:30.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.752 ms 00:31:30.665 00:31:30.665 --- 10.0.0.2 ping statistics --- 00:31:30.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.665 rtt min/avg/max/mdev = 0.752/0.752/0.752/0.000 ms 00:31:30.665 21:27:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:30.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:31:30.665 00:31:30.665 --- 10.0.0.1 ping statistics --- 00:31:30.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.665 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:31:30.665 21:27:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.665 21:27:07 -- nvmf/common.sh@410 -- # return 0 00:31:30.665 21:27:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:30.665 21:27:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.665 21:27:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:30.665 21:27:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.665 21:27:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:30.665 21:27:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:30.665 21:27:07 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:30.665 21:27:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:30.665 21:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.665 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.665 ************************************ 00:31:30.665 START TEST nvmf_target_disconnect_tc1 00:31:30.665 ************************************ 00:31:30.665 21:27:07 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:31:30.665 21:27:07 -- host/target_disconnect.sh@32 -- # set +e 00:31:30.665 21:27:07 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.665 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.665 [2024-06-08 21:27:07.767944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.665 [2024-06-08 21:27:07.768517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.665 [2024-06-08 21:27:07.768535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1794860 with addr=10.0.0.2, port=4420 00:31:30.665 [2024-06-08 21:27:07.768570] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:30.665 [2024-06-08 21:27:07.768580] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:30.665 [2024-06-08 21:27:07.768590] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:30.666 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:30.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:30.666 Initializing NVMe Controllers 00:31:30.666 21:27:07 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:30.666 21:27:07 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:30.666 21:27:07 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:31:30.666 21:27:07 -- common/autotest_common.sh@1132 -- # return 0 00:31:30.666 
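nvmf_target_disconnect_tc1 above deliberately points the reconnect example at 10.0.0.2:4420 before any subsystem is listening, so the expected outcome is exactly what is logged: connect() errno 111 and a failed spdk_nvme_probe(). The 10.0.0.x addresses it targets come from the nvmftestinit/nvmf_tcp_init sequence traced a little earlier; a condensed sketch of those steps (root required; interface names cvl_0_0/cvl_0_1 and namespace cvl_0_0_ns_spdk are the ones from this run):

    # The target-side interface moves into its own network namespace; the
    # host side stays in the default namespace and talks to it over 10.0.0.0/24.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target reachability check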
21:27:07 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:30.666 21:27:07 -- host/target_disconnect.sh@41 -- # set -e 00:31:30.666 00:31:30.666 real 0m0.104s 00:31:30.666 user 0m0.044s 00:31:30.666 sys 0m0.059s 00:31:30.666 21:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.666 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 ************************************ 00:31:30.666 END TEST nvmf_target_disconnect_tc1 00:31:30.666 ************************************ 00:31:30.666 21:27:07 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:30.666 21:27:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:30.666 21:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.666 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 ************************************ 00:31:30.666 START TEST nvmf_target_disconnect_tc2 00:31:30.666 ************************************ 00:31:30.666 21:27:07 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:31:30.666 21:27:07 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:30.666 21:27:07 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:30.666 21:27:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:30.666 21:27:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:30.666 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 21:27:07 -- nvmf/common.sh@469 -- # nvmfpid=2583207 00:31:30.666 21:27:07 -- nvmf/common.sh@470 -- # waitforlisten 2583207 00:31:30.666 21:27:07 -- common/autotest_common.sh@819 -- # '[' -z 2583207 ']' 00:31:30.666 21:27:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:30.666 21:27:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.666 21:27:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:30.666 21:27:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.666 21:27:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:30.666 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 [2024-06-08 21:27:07.886313] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:30.666 [2024-06-08 21:27:07.886376] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.666 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.666 [2024-06-08 21:27:07.971502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.666 [2024-06-08 21:27:08.063084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:30.666 [2024-06-08 21:27:08.063238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.666 [2024-06-08 21:27:08.063248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.666 [2024-06-08 21:27:08.063255] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
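For nvmf_target_disconnect_tc2, disconnect_init starts the target with -m 0xF0, a core mask selecting CPUs 4-7, which is what the reactor start-up notices that follow report. A quick, self-contained bash sketch for decoding such a mask (nothing here beyond standard shell arithmetic):

    # Print the CPU indices selected by an SPDK core mask such as 0xF0.
    mask=$((0xF0))
    for cpu in $(seq 0 31); do
        (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
    done
    # -> cores 4, 5, 6 and 7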
00:31:30.666 [2024-06-08 21:27:08.063449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:30.666 [2024-06-08 21:27:08.063676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:30.666 [2024-06-08 21:27:08.063839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:30.666 [2024-06-08 21:27:08.063840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:30.666 21:27:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:30.666 21:27:08 -- common/autotest_common.sh@852 -- # return 0 00:31:30.666 21:27:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:30.666 21:27:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:30.666 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 21:27:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.666 21:27:08 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:30.666 21:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.666 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 Malloc0 00:31:30.666 21:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.666 21:27:08 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:30.666 21:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.666 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.666 [2024-06-08 21:27:08.740610] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.666 21:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.666 21:27:08 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:30.666 21:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.666 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.927 21:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.927 21:27:08 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.927 21:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.927 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.927 21:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.927 21:27:08 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.927 21:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.927 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.927 [2024-06-08 21:27:08.780966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.927 21:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.927 21:27:08 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.927 21:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.927 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:31:30.927 21:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.927 21:27:08 -- host/target_disconnect.sh@50 -- # reconnectpid=2583479 00:31:30.927 21:27:08 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:30.927 21:27:08 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.927 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.841 21:27:10 -- host/target_disconnect.sh@53 -- # kill -9 2583207 00:31:32.841 21:27:10 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Read completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.841 Write completed with error (sct=0, sc=8) 00:31:32.841 starting I/O failed 00:31:32.842 [2024-06-08 21:27:10.813693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:32.842 [2024-06-08 21:27:10.814215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.814803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.814841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 
with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.815319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.815815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.815852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.816296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.816722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.816759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.817208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.817619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.817655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.818017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.818629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.818666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.819000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.819471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.819482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.819982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.820407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.820418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.820758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.821101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.821111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 
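The tc2 bring-up traced before the failure storm (a 64 MiB Malloc0 bdev, the TCP transport created with '-t tcp -o', subsystem nqn.2016-06.io.spdk:cnode1 with a Malloc0-backed namespace, plus data and discovery listeners on 10.0.0.2:4420) is driven through the repo's rpc_cmd wrapper. A rough rpc.py equivalent, offered only as a sketch (the scripts/rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions, not taken from this log), would be:

  RPC="scripts/rpc.py"                                  # assumed path inside the SPDK checkout
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the second reconnect run is underway, the script kills the target outright (kill -9 2583207), so the 32 outstanding I/Os complete with sct=0/sc=8 (the NVMe generic status for Command Aborted due to SQ Deletion) and every subsequent connection attempt from the initiator is refused with errno 111, which is the pattern that repeats through the rest of this excerpt.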
00:31:32.842 [2024-06-08 21:27:10.821695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.822192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.822205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.822660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.823123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.823137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.823602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.824067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.824080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.824389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.824734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.824744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.825244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.825794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.825830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.826299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.826646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.826656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.827091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.827648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.827685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 
00:31:32.842 [2024-06-08 21:27:10.827940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.828341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.828351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.828454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.828659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.828672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.828988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.829419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.829428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.829826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.830247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.830256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.830629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.831057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.831067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.831513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.831963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.831972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.832384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.832874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.832884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 
00:31:32.842 [2024-06-08 21:27:10.833295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.833791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.833800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.834215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.834388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.834399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.842 qpair failed and we were unable to recover it. 00:31:32.842 [2024-06-08 21:27:10.834753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.842 [2024-06-08 21:27:10.835177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.835187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.835565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.836000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.836010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.836338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.836685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.836695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.837108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.837529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.837538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.838017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.838361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.838370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 
00:31:32.843 [2024-06-08 21:27:10.838856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.839287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.839297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.839754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.840253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.840268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.840711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.841055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.841064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.841541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.841817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.841826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.842190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.842539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.842549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.842882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.843233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.843242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.843682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.844019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.844028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 
00:31:32.843 [2024-06-08 21:27:10.844425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.844856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.844866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.845212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.845612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.845621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.846074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.846496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.846508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.846866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.847297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.847309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.847773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.848217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.848229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.848659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.849086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.849098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.849431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.849922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.849933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 
00:31:32.843 [2024-06-08 21:27:10.850258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.850578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.850590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.851035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.851457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.851469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.851920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.852294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.852308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.852751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.853172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.853184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.853622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.854084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.854096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.854579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.854921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.843 [2024-06-08 21:27:10.854932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.843 qpair failed and we were unable to recover it. 00:31:32.843 [2024-06-08 21:27:10.855354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.855731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.855743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 
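The remainder of this excerpt is that same refusal repeated for each connection attempt the initiator makes. A quick way to confirm from the host that the refusals are expected rather than a networking problem (hypothetical diagnostics, not part of the test script) is to check that nothing is listening on the target port inside the namespace and to decode the errno:

  ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'      # expect no listener after kill -9
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # -> ECONNREFUSED Connection refused

Until a target is listening on 10.0.0.2:4420 again, the initiator keeps logging the connect()/qpair-failure pair seen above.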
00:31:32.844 [2024-06-08 21:27:10.856176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.856571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.856583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.856837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.857271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.857284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.857823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.858322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.858338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.858685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.859139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.859155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.859588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.859924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.859941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.860284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.860764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.860781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.861191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.861606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.861623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 
00:31:32.844 [2024-06-08 21:27:10.862059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.862497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.862514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.862852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.863163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.863180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.863564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.864032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.864049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.864498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.864861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.864877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.865168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.865511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.865527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.865951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.866292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.866308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.866797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.867262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.867277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 
00:31:32.844 [2024-06-08 21:27:10.867646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.867978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.867995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.868400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.868909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.868925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.869400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.869830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.869846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.870257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.870809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.870883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.871391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.871777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.871800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.872237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.872780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.872851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.873120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.873560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.873584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 
00:31:32.844 [2024-06-08 21:27:10.874018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.874442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.874465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.874821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.875255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.875277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.875654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.875984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.844 [2024-06-08 21:27:10.876005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.844 qpair failed and we were unable to recover it. 00:31:32.844 [2024-06-08 21:27:10.876366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.876824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.876846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.877007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.877361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.877382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.877893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.878349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.878369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.878763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.879083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.879103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 
00:31:32.845 [2024-06-08 21:27:10.879432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.879899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.879919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.880246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.880700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.880720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.881163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.881567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.881595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.882076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.882556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.882583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.882924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.883373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.883400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.883886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.884343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.884369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.884845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.885315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.885341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 
00:31:32.845 [2024-06-08 21:27:10.885803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.886134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.886161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.886742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.887317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.887353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.887774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.888215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.888243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.888708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.888949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.888976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.889432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.889918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.889945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.890431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.890887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.890914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.891415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.891925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.891952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 
00:31:32.845 [2024-06-08 21:27:10.892419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.892784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.892811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.893174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.893740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.893827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.894309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.894757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.894787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.895245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.895690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.895719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.896160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.896600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.896627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.897009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.897359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.897392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 00:31:32.845 [2024-06-08 21:27:10.897918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.898370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.845 [2024-06-08 21:27:10.898397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.845 qpair failed and we were unable to recover it. 
00:31:32.845 [2024-06-08 21:27:10.898830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.899177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.899203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.899666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.900102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.900129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.900503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.900967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.900993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.901317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.901698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.901724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.902171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.902600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.902628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.903094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.903603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.903631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.904017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.904482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.904509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 
00:31:32.846 [2024-06-08 21:27:10.904965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.905411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.905439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.905967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.906396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.906433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.906861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.907327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.907353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.907950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.908503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.908542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.909035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.909608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.909697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.910258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.910728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.910757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.911219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.911674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.911701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 
00:31:32.846 [2024-06-08 21:27:10.912060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.912523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.912550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.913018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.913530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.913557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.914007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.914442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.914480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.914866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.915221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.915252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.915711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.916034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.916060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.916542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.916945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.916971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.846 qpair failed and we were unable to recover it. 00:31:32.846 [2024-06-08 21:27:10.917448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.846 [2024-06-08 21:27:10.917959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.917986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 
00:31:32.847 [2024-06-08 21:27:10.918441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.918963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.918989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.919446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.919908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.919935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.920439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.920911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.920938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.921427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.921874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.921900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.922347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.922786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.922812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.923289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.923754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.923800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.924270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.924715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.924744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 
00:31:32.847 [2024-06-08 21:27:10.925197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.925634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.925661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.926111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.926558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.926584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.927063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.927505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.927532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.927995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.928435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.928463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.928932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.929334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.847 [2024-06-08 21:27:10.929360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:32.847 qpair failed and we were unable to recover it. 00:31:32.847 [2024-06-08 21:27:10.929764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.930200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.930229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.930696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.931136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.931162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 
00:31:33.114 [2024-06-08 21:27:10.931584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.932051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.932076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.932455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.932792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.932826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.933327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.933666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.933693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.934138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.934586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.934612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.935082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.935643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.935670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.936070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.936432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.936459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.936931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.937367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.937393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 
00:31:33.114 [2024-06-08 21:27:10.937898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.938333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.938359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.938866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.939343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.939369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.939948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.940379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.940414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.940771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.941226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.941251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.941691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.942127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.942160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.942531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.943009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.943037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.114 [2024-06-08 21:27:10.943499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.944023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.944050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 
00:31:33.114 [2024-06-08 21:27:10.944425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.944900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.114 [2024-06-08 21:27:10.944927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.114 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.945315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.945705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.945733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.946115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.946554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.946583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.947070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.947500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.947528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.948012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.948339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.948365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.948726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.949075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.949101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.949576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.949969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.949996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 
00:31:33.115 [2024-06-08 21:27:10.950344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.950696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.950727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.951185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.951644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.951671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.952114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.952543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.952569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.953008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.953440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.953468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.953847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.954187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.954216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.954691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.955133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.955159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.955646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.956082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.956109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 
00:31:33.115 [2024-06-08 21:27:10.956562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.957003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.957029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.957495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.957953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.957979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.958422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.958803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.958829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.959214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.959694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.959721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.960194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.960656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.960683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.961054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.961489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.961517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.962042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.962514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.962542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 
00:31:33.115 [2024-06-08 21:27:10.963005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.963443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.963470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.963960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.964399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.964435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.964922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.965360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.965386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.115 [2024-06-08 21:27:10.965866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.966300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.115 [2024-06-08 21:27:10.966327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.115 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.966733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.967171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.967197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.967758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.968308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.968344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.968855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.969291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.969318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 
00:31:33.116 [2024-06-08 21:27:10.969818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.970253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.970279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.970746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.971098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.971123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.971576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.971928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.971963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.972425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.972833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.972859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.973326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.973649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.973676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.974162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.974603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.974631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.975086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.975573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.975601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 
00:31:33.116 [2024-06-08 21:27:10.976072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.976358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.976385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.976905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.977342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.977369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.977863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.978368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.978394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.978889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.979327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.979354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.979811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.980276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.980303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.980781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.981291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.981317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.981783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.982219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.982246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 
00:31:33.116 [2024-06-08 21:27:10.982744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.983180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.983207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.983830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.984385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.984441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.984903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.985354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.985381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.985858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.986328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.986354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.986857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.987443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.987482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.987982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.988424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.988453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.116 [2024-06-08 21:27:10.988941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.989392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.989430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 
00:31:33.116 [2024-06-08 21:27:10.989906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.990245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.116 [2024-06-08 21:27:10.990272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.116 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.990902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.991589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.991679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.992214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.992666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.992756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.993192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.993685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.993714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.994180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.994734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.994823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.995346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.995802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.995832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.996303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.996733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.996762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 
00:31:33.117 [2024-06-08 21:27:10.997219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.997789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.997878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.998466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.998971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.999000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:10.999482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.999956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:10.999982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.000347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.000827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.000856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.001220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.001667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.001695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.002131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.002689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.002779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.003389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.003841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.003870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 
00:31:33.117 [2024-06-08 21:27:11.004332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.004683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.004720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.005258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.005605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.005633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.006174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.006646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.006673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.007146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.007597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.007625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.008096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.008534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.008563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.008920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.009393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.009430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.009899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.010255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.010281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 
00:31:33.117 [2024-06-08 21:27:11.010636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.011000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.011026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.011488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.011967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.011993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.012460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.012941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.012968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.013425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.013841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.117 [2024-06-08 21:27:11.013868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.117 qpair failed and we were unable to recover it. 00:31:33.117 [2024-06-08 21:27:11.014355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.014825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.014852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.015321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.015854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.015882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.016337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.016775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.016803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 
00:31:33.118 [2024-06-08 21:27:11.017280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.017741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.017768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.018241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.018775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.018866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.019427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.019899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.019927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.020379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.020877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.020905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.021358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.021902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.021991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.022666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.023216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.023253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.023745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.024200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.024227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 
00:31:33.118 [2024-06-08 21:27:11.024785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.025346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.025384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.025893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.026333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.026361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.026821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.027157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.027184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.027761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.028313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.028349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.028825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.029306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.029334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.029806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.030269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.030296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.030776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.031108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.031135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 
00:31:33.118 [2024-06-08 21:27:11.031582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.031952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.031978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.032464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.032918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.032945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.033411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.033886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.033912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.034382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.034827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.034854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.035325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.035792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.035821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.036278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.036753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.036781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.037258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.037606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.037633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 
00:31:33.118 [2024-06-08 21:27:11.038114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.038584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.118 [2024-06-08 21:27:11.038612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.118 qpair failed and we were unable to recover it. 00:31:33.118 [2024-06-08 21:27:11.039087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.039552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.039579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.040056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.040492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.040519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.040981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.041418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.041445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.041926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.042378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.042431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.042887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.043308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.043334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.043832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.044296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.044322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 
00:31:33.119 [2024-06-08 21:27:11.044942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.045431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.045469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.045971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.046458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.046501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.046981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.047452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.047482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.047844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.048196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.048232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.048687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.049153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.049180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.049746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.050305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.050342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.050857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.051313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.051342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 
00:31:33.119 [2024-06-08 21:27:11.051820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.052264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.052290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.052669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.053131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.053158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.053615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.054006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.054033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.054599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.055031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.055058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.055537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.056022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.056048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.056498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.056973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.119 [2024-06-08 21:27:11.057001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.119 qpair failed and we were unable to recover it. 00:31:33.119 [2024-06-08 21:27:11.057470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.057878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.057906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 
00:31:33.120 [2024-06-08 21:27:11.058271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.058723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.058750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.059212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.059689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.059716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.060086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.060460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.060487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.061002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.061460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.061487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.061984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.062426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.062454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.062929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.063370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.063397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.063949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.064433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.064462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 
00:31:33.120 [2024-06-08 21:27:11.064843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.065321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.065347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.065826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.066178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.066203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.066544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.067007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.067045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.067465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.067950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.067975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.068371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.068657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.068685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.069169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.069704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.069732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.070183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.070764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.070855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 
00:31:33.120 [2024-06-08 21:27:11.071306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.071722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.071756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.072187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.072536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.072564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.073031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.073367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.073393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.073906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.074339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.074366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.074832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.075264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.075291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.075843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.076275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.076313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.076780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.077259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.077286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 
00:31:33.120 [2024-06-08 21:27:11.077792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.078259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.078285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.078804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.079189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.079223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.079712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.080146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.120 [2024-06-08 21:27:11.080172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.120 qpair failed and we were unable to recover it. 00:31:33.120 [2024-06-08 21:27:11.080629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.081080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.081106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.081567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.082004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.082030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.082440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.082812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.082845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.083300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.083757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.083786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 
00:31:33.121 [2024-06-08 21:27:11.084243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.084592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.084624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.084971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.085313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.085347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.085756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.086090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.086117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.086571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.087101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.087127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.087602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.088037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.088064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.088532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.089004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.089030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.089485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.089926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.089952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 
00:31:33.121 [2024-06-08 21:27:11.090453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.090873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.090900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.091376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.091741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.091769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.092223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.092672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.092702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.093154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.093598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.093625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.094081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.094551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.094584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.094962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.095287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.095314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.095785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.096317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.096343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 
00:31:33.121 [2024-06-08 21:27:11.096637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.097024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.097050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.097397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.097835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.097861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.098365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.098707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.098734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.099208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.099747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.099838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.100424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.100864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.100893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.101361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.101978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.102068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.121 qpair failed and we were unable to recover it. 00:31:33.121 [2024-06-08 21:27:11.102732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.121 [2024-06-08 21:27:11.103291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.103328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 
00:31:33.122 [2024-06-08 21:27:11.103838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.104282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.104309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.104712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.105146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.105173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.105730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.106213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.106254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.106796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.107240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.107267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.107796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.108230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.108257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.108719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.109173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.109200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.109670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.110031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.110057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 
00:31:33.122 [2024-06-08 21:27:11.110536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.110984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.111010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.111459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.111968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.111994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.112451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.112961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.112987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.113456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.113890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.113917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.114360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.114836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.114864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.115309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.115815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.115843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.116303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.116859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.116885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 
00:31:33.122 [2024-06-08 21:27:11.117366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.117830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.117857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.118327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.118821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.118848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.119332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.119808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.119838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.120211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.120753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.120845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.121397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.121912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.121941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.122462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.122985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.123012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.123620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.124099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.124136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 
00:31:33.122 [2024-06-08 21:27:11.124752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.125174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.125200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.125591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.126008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.126039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.126617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.127106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.122 [2024-06-08 21:27:11.127150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.122 qpair failed and we were unable to recover it. 00:31:33.122 [2024-06-08 21:27:11.127705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.128174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.128202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.128686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.129254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.129292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.129841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.130302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.130329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.130794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.131301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.131328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 
00:31:33.123 [2024-06-08 21:27:11.131782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.132253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.132280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.132755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.133273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.133299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.133770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.134258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.134284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.134811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.135254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.135280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.135834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.136188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.136214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.136697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.137139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.137165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.137671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.138235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.138273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 
00:31:33.123 [2024-06-08 21:27:11.138715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.139170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.139197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.139688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.140132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.140159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.140739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.141294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.141332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.141858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.142329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.142356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.142852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.143300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.143326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.143785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.144226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.144253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.144718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.145208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.145235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 
00:31:33.123 [2024-06-08 21:27:11.145798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.146455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.146495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.146982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.147442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.147474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.147973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.148433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.148466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.148926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.149369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.149396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.149874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.150341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.150367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.150851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.151313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.151340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 00:31:33.123 [2024-06-08 21:27:11.151827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.152276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.123 [2024-06-08 21:27:11.152302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.123 qpair failed and we were unable to recover it. 
00:31:33.123 [2024-06-08 21:27:11.152811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.153341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.153368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.153912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.154180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.154221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.154818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.155267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.155294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.155695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.156132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.156158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.156576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.157050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.157077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.157560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.158019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.158045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.158416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.158905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.158931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 
00:31:33.124 [2024-06-08 21:27:11.159273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.159608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.159639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.160120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.160570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.160598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.161134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.161602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.161629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.162115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.162561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.162587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.162972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.163418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.163445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.163893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.164340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.164366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.164951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.165664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.165757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 
00:31:33.124 [2024-06-08 21:27:11.166314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.166783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.166812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.167279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.167671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.167699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.168055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.168504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.168531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.169010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.169496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.169523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.169878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.170369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.170399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.170874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.171323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.171349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 00:31:33.124 [2024-06-08 21:27:11.171739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.172187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.172214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.124 qpair failed and we were unable to recover it. 
00:31:33.124 [2024-06-08 21:27:11.172692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.173139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.124 [2024-06-08 21:27:11.173165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.173748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.174312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.174348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.174872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.175241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.175268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.175634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.176010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.176046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.176533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.177008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.177035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.177527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.178005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.178032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.178490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.178847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.178873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 
00:31:33.125 [2024-06-08 21:27:11.179337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.179618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.179646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.180109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.180470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.180499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.180982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.181438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.181466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.181955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.182398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.182434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.182927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.183288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.183314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.183596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.184084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.184111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 00:31:33.125 [2024-06-08 21:27:11.184592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.184944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.125 [2024-06-08 21:27:11.184970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.125 qpair failed and we were unable to recover it. 
00:31:33.126 [2024-06-08 21:27:11.185451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.185914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.185941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.186421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.186883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.186909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.187391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.187872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.187899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.188652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.189217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.189254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.189736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.190196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.190224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.190739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.191186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.191212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.191713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.192207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.192248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 
00:31:33.126 [2024-06-08 21:27:11.192810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.193203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.193232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.193660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.194114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.194140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.194672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.195236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.195272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.195782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.196259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.196286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.196708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.197076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.197102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.197597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.198051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.198077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.126 [2024-06-08 21:27:11.198570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.199018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.199044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 
00:31:33.126 [2024-06-08 21:27:11.199418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.199921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.126 [2024-06-08 21:27:11.199947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.126 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.200462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.200970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.200997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.201483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.201940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.201966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.202437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.202927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.202954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.203326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.203849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.203877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.204360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.204905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.204935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.205419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.205788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.205827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 
00:31:33.393 [2024-06-08 21:27:11.206278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.206840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.206867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.207330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.207831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.207858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.208230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.208747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.208840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.393 qpair failed and we were unable to recover it. 00:31:33.393 [2024-06-08 21:27:11.209426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.393 [2024-06-08 21:27:11.209919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.209947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.210423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.211084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.211180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.211818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.212384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.212450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.212884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.213466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.213514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 
00:31:33.394 [2024-06-08 21:27:11.213933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.214397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.214434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.214965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.215329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.215369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.215849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.216354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.216382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.216755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.217221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.217250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.217660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.218136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.218162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.218634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.219109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.219135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.219589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.220040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.220067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 
00:31:33.394 [2024-06-08 21:27:11.220557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.221024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.221050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.221552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.222014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.222041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.222589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.222951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.222986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.223392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.224004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.224031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.224678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.225232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.225270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.225829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.226291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.226317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.226863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.227348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.227375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 
00:31:33.394 [2024-06-08 21:27:11.227842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.228357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.228384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.228789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.229261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.229288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.229795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.230245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.230271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.230748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.231195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.231222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.231886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.232611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.232706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.233266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.233657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.233709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 00:31:33.394 [2024-06-08 21:27:11.234115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.234569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.394 [2024-06-08 21:27:11.234598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.394 qpair failed and we were unable to recover it. 
00:31:33.395 [2024-06-08 21:27:11.235089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.235569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.235596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.236093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.236552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.236579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.237072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.237522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.237550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.237997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.238446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.238474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.238879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.239326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.239352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.239829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.240276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.240302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.240692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.241177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.241204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 
00:31:33.395 [2024-06-08 21:27:11.241712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.242162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.242188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.242617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.243100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.243135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.243702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.244317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.244354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.244876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.245329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.245357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.245781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.246228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.246255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.246806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.247292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.247318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.247829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.248313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.248340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 
00:31:33.395 [2024-06-08 21:27:11.248840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.249321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.249349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.249922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.250650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.250748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.251367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.251908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.251937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.252329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.252600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.252640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.253138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.253664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.253761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.254218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.254616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.254647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.255081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.255533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.255560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 
00:31:33.395 [2024-06-08 21:27:11.256092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.256461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.256489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.256884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.257274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.257314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.257811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.258278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.258305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.395 qpair failed and we were unable to recover it. 00:31:33.395 [2024-06-08 21:27:11.258667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.395 [2024-06-08 21:27:11.259144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.259172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.259652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.260105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.260131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.260616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.261073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.261099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.261576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.262032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.262059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 
00:31:33.396 [2024-06-08 21:27:11.262531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.262984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.263010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.263497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.263968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.263994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.264472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.264936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.264964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.265468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.265927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.265954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.266329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.266836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.266864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.267355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.267804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.267831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.268316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.268839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.268866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 
00:31:33.396 [2024-06-08 21:27:11.269332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.269863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.269891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.270264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.270741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.270769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.271250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.271796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.271893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.272462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.272968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.272996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.273494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.273976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.274003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.274467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.274918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.274946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.275423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.275894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.275920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 
00:31:33.396 [2024-06-08 21:27:11.276377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.276864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.276892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.277364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.277853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.277880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.278256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.278744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.278773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.279176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.279649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.279676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.280163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.280651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.280680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.281142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.281718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.281818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 00:31:33.396 [2024-06-08 21:27:11.282276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.282782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.282812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.396 qpair failed and we were unable to recover it. 
00:31:33.396 [2024-06-08 21:27:11.283333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.396 [2024-06-08 21:27:11.283794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.283822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.284314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.284774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.284802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.285276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.285759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.285788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.286281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.286767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.286795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.287322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.287783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.287810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.288297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.288728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.288755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.289250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.289807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.289907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 
00:31:33.397 [2024-06-08 21:27:11.290607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.291248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.291285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.291790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.292248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.292275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.292760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.293214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.293240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.293694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.294176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.294202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.294588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.295057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.295084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.295560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.296044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.296070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.296560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.297021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.297047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 
00:31:33.397 [2024-06-08 21:27:11.297519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.297988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.298014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.298496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.298978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.299004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.299491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.299864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.299890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.300371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.300868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.300897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.301394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.301890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.301916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.302399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.302895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.302921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.303425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.303907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.303934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 
00:31:33.397 [2024-06-08 21:27:11.304433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.304941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.304967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.305600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.306235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.306273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.306810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.307279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.307306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.307813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.308277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.308303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.397 qpair failed and we were unable to recover it. 00:31:33.397 [2024-06-08 21:27:11.308778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.397 [2024-06-08 21:27:11.309237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.309264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.309826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.310321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.310347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.310850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.311330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.311356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 
00:31:33.398 [2024-06-08 21:27:11.311863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.312113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.312148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.312648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.313108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.313136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.313654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.314033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.314062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.314553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.315014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.315041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.315543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.316003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.316031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.316528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.317008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.317035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.317510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.318005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.318031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 
00:31:33.398 [2024-06-08 21:27:11.318510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.318893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.318919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.319412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.319925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.319951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.320441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.320903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.320929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.321481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.321960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.321986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.322462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.322925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.322952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.323441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.324010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.324037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.324528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.325018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.325045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 
00:31:33.398 [2024-06-08 21:27:11.325525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.325991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.326017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.326495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.326953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.326980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.327399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.327956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.327985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.328484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.328990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.398 [2024-06-08 21:27:11.329017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.398 qpair failed and we were unable to recover it. 00:31:33.398 [2024-06-08 21:27:11.329492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.329952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.329979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.330465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.330932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.330959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.331454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.331914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.331942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 
00:31:33.399 [2024-06-08 21:27:11.332442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.332800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.332833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.333219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.333735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.333763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.334273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.334759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.334786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.335276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.335737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.335764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.336259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.336789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.336816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.337293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.337787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.337815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.338317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.338690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.338718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 
00:31:33.399 [2024-06-08 21:27:11.339216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.339680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.339707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.340169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.340718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.340820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.341275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.341789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.341819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.342285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.342766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.342794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.343296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.343779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.343808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.344300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.344785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.344813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.345199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.347559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.347619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 
00:31:33.399 [2024-06-08 21:27:11.348128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.348604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.348633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.349133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.349516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.349543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.350122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.350590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.350618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.351100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.351471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.351521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.352041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.352507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.352535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.353028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.353576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.353604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.354063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.354528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.354556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 
00:31:33.399 [2024-06-08 21:27:11.354928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.355395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.399 [2024-06-08 21:27:11.355453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.399 qpair failed and we were unable to recover it. 00:31:33.399 [2024-06-08 21:27:11.355940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.356400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.356453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.356838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.357316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.357343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.357823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.358255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.358282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.358747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.359219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.359246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.359650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.360121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.360147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.360658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.361123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.361149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 
00:31:33.400 [2024-06-08 21:27:11.361636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.362099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.362125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.362703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.363330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.363367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.363915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.364384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.364461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.364996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.365621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.365722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.366184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.366775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.366877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.367655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.368283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.368321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.368878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.369260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.369300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 
00:31:33.400 [2024-06-08 21:27:11.369783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.370249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.370276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.370803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.371266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.371293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.371808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.372295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.372323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.372681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.373165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.373193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.373677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.374073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.374099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.374565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.375027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.375054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.375534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.376010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.376048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 
00:31:33.400 [2024-06-08 21:27:11.376411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.376906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.376934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.377440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.377960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.377987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.378481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.378943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.378969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.379450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.379886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.379912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.400 [2024-06-08 21:27:11.380393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.380785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.400 [2024-06-08 21:27:11.380811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.400 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.381193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.381662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.381689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.382170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.382768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.382868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 
00:31:33.401 [2024-06-08 21:27:11.383459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.383868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.383901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.384294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.384779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.384807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.385297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.385783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.385824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.386368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.386867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.386895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.387328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.387869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.387896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.388377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.388910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.388938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.389441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.389933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.389960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 
00:31:33.401 [2024-06-08 21:27:11.390456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.390912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.390940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.391440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.391902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.391929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.392433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.392911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.392939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.393430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.393898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.393925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.394422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.394898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.394925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.395417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.395905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.395938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.396470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.396982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.397009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 
00:31:33.401 [2024-06-08 21:27:11.397481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.397945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.397971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.398469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.398936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.398963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.399459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.399962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.399988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.400474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.400960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.400986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.401487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.401987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.402013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.402505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.402991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.403017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.403503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.403954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.403982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 
00:31:33.401 [2024-06-08 21:27:11.404484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.404987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.405014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.401 [2024-06-08 21:27:11.405484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.405962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.401 [2024-06-08 21:27:11.405996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.401 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.406470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.406847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.406873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.407380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.407854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.407881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.408359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.408820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.408848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.409339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.409750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.409777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.410268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.410634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.410664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 
00:31:33.402 [2024-06-08 21:27:11.411214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.411670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.411698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.412197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.412757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.412860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.413460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.413950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.413978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.414601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.415188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.415225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.415748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.416124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.416152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.416602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.416972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.416998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.417468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.417831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.417863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 
00:31:33.402 [2024-06-08 21:27:11.418355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.418719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.418746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.419251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.419714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.419742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.420229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.420607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.420634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.421060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.421543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.421571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.421947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.422429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.422455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.422954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.423422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.423449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.423849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.424341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.424367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 
00:31:33.402 [2024-06-08 21:27:11.424887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.425348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.425375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.425632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.426155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.426181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.426784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.427375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.427429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.427942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.428417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.428446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.428930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.429397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.429439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.402 qpair failed and we were unable to recover it. 00:31:33.402 [2024-06-08 21:27:11.429906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.430419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.402 [2024-06-08 21:27:11.430448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.430984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.431450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.431477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 
00:31:33.403 [2024-06-08 21:27:11.431956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.432468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.432517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.432922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.433386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.433441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.433941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.434467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.434518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.435023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.435487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.435515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.435980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.436439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.436466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.436949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.437434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.437462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.437957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.438420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.438447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 
00:31:33.403 [2024-06-08 21:27:11.438957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.439423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.439451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.439938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.440414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.440442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.441027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.441386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.441427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.441817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.442282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.442308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.442884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.443607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.443709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.444217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.444795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.444896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.445658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.446285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.446323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 
00:31:33.403 [2024-06-08 21:27:11.446946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.447417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.447446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.447936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.448398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.448436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.448982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.449480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.449532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.449986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.450354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.450389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.450789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.451316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.451344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.451841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.452304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.452331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.452873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.453330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.453358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 
00:31:33.403 [2024-06-08 21:27:11.453850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.454206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.454233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.454793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.455429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.455467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.403 [2024-06-08 21:27:11.455925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.456390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.403 [2024-06-08 21:27:11.456449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.403 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.456987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.457484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.457539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.458041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.458631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.458733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.459312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.459875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.459904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.460380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.460910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.460938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 
00:31:33.404 [2024-06-08 21:27:11.461443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.461920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.461947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.462335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.462692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.462721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.463203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.463763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.463865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.464458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.464976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.465005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.465468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.465929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.465956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.466452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.466909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.466936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.467439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.467801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.467841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 
00:31:33.404 [2024-06-08 21:27:11.468261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.468801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.468829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.469306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.469784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.469812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.470307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.470775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.470803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.471295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.471775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.471802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.472279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.472772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.472800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.473296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.473798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.473825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.474300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.474784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.474811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 
00:31:33.404 [2024-06-08 21:27:11.475308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.475778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.475805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.476280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.476764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.476791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.477265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.477751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.404 [2024-06-08 21:27:11.477779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.404 qpair failed and we were unable to recover it. 00:31:33.404 [2024-06-08 21:27:11.478269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.478773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.478804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.479332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.479830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.479859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.480342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.480844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.480871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.481347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.481723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.481750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 
00:31:33.671 [2024-06-08 21:27:11.482255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.482718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.482745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.483207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.483690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.483792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.484261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.484732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.484762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.485241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.485711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.485739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.486232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.486792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.486894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.487608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.488197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.488235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.488741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.489128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.489156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 
00:31:33.671 [2024-06-08 21:27:11.489620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.490095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.490122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.490526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.491025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.491052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.491446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.491955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.491982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.492468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.492963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.492990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.493474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.493939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.493966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.494456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.494843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.494869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.495355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.495817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.495845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 
00:31:33.671 [2024-06-08 21:27:11.496343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.496812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.496839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.497333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.497845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.497873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.498248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.498828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.498930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.499605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.500237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.500274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.500826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.501178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.501205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.671 [2024-06-08 21:27:11.501645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.502131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.671 [2024-06-08 21:27:11.502159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.671 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.502660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.503124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.503150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 
00:31:33.672 [2024-06-08 21:27:11.503748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.504334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.504371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.504837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.505330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.505358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.505866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.506302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.506329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.506809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.507270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.507297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.507765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.508165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.508193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.508720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.509352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.509391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.509969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.510446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.510479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 
00:31:33.672 [2024-06-08 21:27:11.510976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.511463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.511491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.511854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.512317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.512344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.512714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.513209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.513237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.513574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.514024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.514050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.514432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.514930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.514960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.515458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.515923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.515950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 00:31:33.672 [2024-06-08 21:27:11.516325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.516840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.672 [2024-06-08 21:27:11.516869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.672 qpair failed and we were unable to recover it. 
00:31:33.672 [2024-06-08 21:27:11.517347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.672 [2024-06-08 21:27:11.517861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.672 [2024-06-08 21:27:11.517890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420
00:31:33.672 qpair failed and we were unable to recover it.
00:31:33.672 [2024-06-08 21:27:11.518380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.672 [2024-06-08 21:27:11.518883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.672 [2024-06-08 21:27:11.518912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420
00:31:33.672 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (two "connect() failed, errno = 111" records, then "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every further connection attempt from 21:27:11.519 through 21:27:11.672 ...]
00:31:33.678 [2024-06-08 21:27:11.673604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.678 [2024-06-08 21:27:11.674196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.678 [2024-06-08 21:27:11.674234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420
00:31:33.678 qpair failed and we were unable to recover it.
00:31:33.678 [2024-06-08 21:27:11.674780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.678 [2024-06-08 21:27:11.675243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.678 [2024-06-08 21:27:11.675270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.678 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.675754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.676214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.676241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.676749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.677213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.677239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.677735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.678367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.678426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.678998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.679660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.679762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.680349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.680861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.680891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.681274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.681682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.681709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 
00:31:33.679 [2024-06-08 21:27:11.682187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.682773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.682875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.683349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.683862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.683892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.684412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.684900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.684928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.685289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.685854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.685955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.686658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.687294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.687331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.687856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.688318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.688345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.688878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.689341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.689368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 
00:31:33.679 [2024-06-08 21:27:11.689757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.690157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.690183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.690752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.691338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.691376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.691898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.692385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.692428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.692964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.693608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.693709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.694294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.694813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.694844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.695340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.695910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.696010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.696468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.696864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.696898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 
00:31:33.679 [2024-06-08 21:27:11.697415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.697900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.697927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.698419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.698885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.698912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.699449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.699949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.699977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.700469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.701007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.701034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.701647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.702290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.679 [2024-06-08 21:27:11.702328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.679 qpair failed and we were unable to recover it. 00:31:33.679 [2024-06-08 21:27:11.702844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.703306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.703334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.703835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.704324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.704363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 
00:31:33.680 [2024-06-08 21:27:11.704868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.705329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.705356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.705902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.706367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.706393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.706871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.707357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.707384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.707886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.708423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.708451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.708940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.709399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.709443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.709943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.710399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.710438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.710908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.711376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.711410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 
00:31:33.680 [2024-06-08 21:27:11.711876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.712341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.712368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.712768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.713134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.713161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.713737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.714315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.714364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.714877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.715349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.715377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.715969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.716672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.716774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.717352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.717942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.717973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.718606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.719231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.719268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 
00:31:33.680 [2024-06-08 21:27:11.719849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.720234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.720274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.720707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.721189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.721215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.721713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.722079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.722111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.722589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.723048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.723076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.723496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.723987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.724014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.724507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.724990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.725029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.725505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.726002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.726028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 
00:31:33.680 [2024-06-08 21:27:11.726542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.727005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.727032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.727499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.727979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.728006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.680 qpair failed and we were unable to recover it. 00:31:33.680 [2024-06-08 21:27:11.728495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.680 [2024-06-08 21:27:11.728953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.728980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.729454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.729919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.729945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.730433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.730813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.730840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.731338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.731801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.731828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.732389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.732892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.732919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 
00:31:33.681 [2024-06-08 21:27:11.733431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.733880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.733907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.734386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.734890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.734917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.735396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.735889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.735916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.736416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.736903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.736929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.737306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.737869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.737972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.738657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.739259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.739298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.739799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.740269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.740297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 
00:31:33.681 [2024-06-08 21:27:11.740860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.741313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.741341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.741846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.742307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.742335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.742763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.743130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.743162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.743545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.743947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.743981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.744347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.744807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.744835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.745383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.745901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.745929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.746424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.746900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.746927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 
00:31:33.681 [2024-06-08 21:27:11.747316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.747826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.747928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.748646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.749156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.749194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.681 [2024-06-08 21:27:11.749725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.750192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.681 [2024-06-08 21:27:11.750220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.681 qpair failed and we were unable to recover it. 00:31:33.682 [2024-06-08 21:27:11.750883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.751663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.751767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.682 qpair failed and we were unable to recover it. 00:31:33.682 [2024-06-08 21:27:11.752352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.752854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.752887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.682 qpair failed and we were unable to recover it. 00:31:33.682 [2024-06-08 21:27:11.753382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.753823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.753852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.682 qpair failed and we were unable to recover it. 00:31:33.682 [2024-06-08 21:27:11.754340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.754966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.682 [2024-06-08 21:27:11.755067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.682 qpair failed and we were unable to recover it. 
00:31:33.682 [2024-06-08 21:27:11.755753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.948 [2024-06-08 21:27:11.758532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.948 [2024-06-08 21:27:11.758610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.948 qpair failed and we were unable to recover it. 00:31:33.948 [2024-06-08 21:27:11.759205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.948 [2024-06-08 21:27:11.759760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.948 [2024-06-08 21:27:11.759862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.948 qpair failed and we were unable to recover it. 00:31:33.948 [2024-06-08 21:27:11.763125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.948 [2024-06-08 21:27:11.763775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.948 [2024-06-08 21:27:11.763878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.948 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.764430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.768421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.768470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.768988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.769463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.769495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.770007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.770472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.770503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.771024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.771501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.771531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 
00:31:33.949 [2024-06-08 21:27:11.772072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.772527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.772557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.773116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.773582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.773611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.774104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.774580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.774610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.775114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.775583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.775613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.776114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.776620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.776678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.777116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.777601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.777627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.778106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.778564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.778589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 
00:31:33.949 [2024-06-08 21:27:11.779139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.779525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.779549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.779999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.780453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.780473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.780972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.781424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.781445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.781963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.782414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.782434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.782899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.783353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.783373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.783880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.784338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.784358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.784877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.785333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.785353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 
00:31:33.949 [2024-06-08 21:27:11.785861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.786343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.786364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.786851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.787337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.787357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.787874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.788349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.788378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.788809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.789198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.789219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.789701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.790699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.790748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.791179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.791523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.791553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.949 qpair failed and we were unable to recover it. 00:31:33.949 [2024-06-08 21:27:11.792035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.949 [2024-06-08 21:27:11.792502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.792530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 
00:31:33.950 [2024-06-08 21:27:11.793012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.793473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.793503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.794632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.795144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.795174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.796442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.796970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.796998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.797520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.798030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.798057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.798456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.798931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.798960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.799443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.799859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.799886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.800445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.800967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.800994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 
00:31:33.950 [2024-06-08 21:27:11.801495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.801971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.801998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.802490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.802973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.803000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.803476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.803941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.803968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.804530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.805034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.805061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.805556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.806045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.806072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.806603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.807066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.807093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.807495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.807994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.808020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 
00:31:33.950 [2024-06-08 21:27:11.808512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.809027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.809054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.809530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.809933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.809960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.810449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.810933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.810960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.811431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.811908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.811935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.812329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.812705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.812732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.813099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.813460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.813491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.813999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.814507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.814533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 
00:31:33.950 [2024-06-08 21:27:11.815031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.815492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.815522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.816017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.816503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.816533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.817047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.817545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.950 [2024-06-08 21:27:11.817576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.950 qpair failed and we were unable to recover it. 00:31:33.950 [2024-06-08 21:27:11.818070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.818557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.818587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.819090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.819595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.819626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.820114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.820592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.820624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.821103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.821496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.821528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 
00:31:33.951 [2024-06-08 21:27:11.822011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.822534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.822565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.822835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.823219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.823250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.823717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.824211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.824242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.824710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.825058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.825088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.825446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.825983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.826013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.826525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.827017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.827047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.827543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.828040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.828070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 
00:31:33.951 [2024-06-08 21:27:11.828582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.829098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.829127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.829584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.830110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.830140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.830644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.831162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.831191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.831683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.832120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.832150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.832754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.833376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.833434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.833851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.834248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.834283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.834631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.835179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.835208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 
00:31:33.951 [2024-06-08 21:27:11.835592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.836109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.836138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.836515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.837078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.837108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.837618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.838110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.838138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.838515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.839056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.839086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.839583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.840062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.840093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.840582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.841090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.841119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 00:31:33.951 [2024-06-08 21:27:11.841629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.842106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.951 [2024-06-08 21:27:11.842135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.951 qpair failed and we were unable to recover it. 
00:31:33.951 [2024-06-08 21:27:11.842517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.842904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.842932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.843429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.843792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.843826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.844315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.844606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.844645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.845044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.845529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.845560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.846105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.846606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.846637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.847121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.847644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.847676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.848170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.848660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.848690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 
00:31:33.952 [2024-06-08 21:27:11.849180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.849766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.849871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.850374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.851002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.851036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.851623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.852258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.852299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.852835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.853371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.853411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.853978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.854481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.854533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.855061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.855645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.855753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.856389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.856861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.856894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 
00:31:33.952 [2024-06-08 21:27:11.857388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.857843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.857875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.858379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.858880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.858910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.859385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.859973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.860078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.860721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.861362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.861420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.861821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.862315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.862346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.862904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.863362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.863391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.863892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.864375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.864416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 
00:31:33.952 [2024-06-08 21:27:11.864927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.865427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.865460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.865968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.866440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.866469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.866976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.867467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.867518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.868082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.868352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.952 [2024-06-08 21:27:11.868395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.952 qpair failed and we were unable to recover it. 00:31:33.952 [2024-06-08 21:27:11.868910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.869422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.869452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.869858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.870344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.870374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.870793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.871223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.871252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 
00:31:33.953 [2024-06-08 21:27:11.871761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.872176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.872206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.872710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.873223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.873265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.873790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.874310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.874341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.874732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.875099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.875135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.875683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.876164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.876193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.876611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.877087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.877116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.877613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.877983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.878026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 
00:31:33.953 [2024-06-08 21:27:11.878528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.878998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.879028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.879521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.880023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.880053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.880524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.881031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.881061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.881549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.882001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.882031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.882529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.883040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.883069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.883460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.883822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.883852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.884256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.884730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.884760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 
00:31:33.953 [2024-06-08 21:27:11.885256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.885719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.885750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.886249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.886697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.886727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.887242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.887719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.887768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.888247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.888719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.888750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.889269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.889639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.889668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.890120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.890641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.890671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.891161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.891699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.891730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 
00:31:33.953 [2024-06-08 21:27:11.892091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.892581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.953 [2024-06-08 21:27:11.892611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.953 qpair failed and we were unable to recover it. 00:31:33.953 [2024-06-08 21:27:11.893136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.893621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.893652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.894129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.894710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.894815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.895252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.895765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.895798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.896179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.896682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.896713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.897205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.897761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.897878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.898658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.899286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.899326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 
00:31:33.954 [2024-06-08 21:27:11.899922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.900473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.900526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.901038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.901523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.901552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.902049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.902545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.902576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.903064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.903538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.903569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.904072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.904557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.904587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.904938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.905449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.905479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.905996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.906502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.906532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 
00:31:33.954 [2024-06-08 21:27:11.906964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.907373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.907410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.907948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.908323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.908366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.908904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.909256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.909285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.909673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.910143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.910173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.910601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.911086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.911117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.911618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.912109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.912139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.954 qpair failed and we were unable to recover it. 00:31:33.954 [2024-06-08 21:27:11.912621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.913102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.954 [2024-06-08 21:27:11.913131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 
00:31:33.955 [2024-06-08 21:27:11.913619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.914018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.914047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.914474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.915011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.915041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.915531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.916026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.916056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.916557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.916932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.916961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.917488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.918003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.918032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.918514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.919085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.919115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.919528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.920050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.920078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 
00:31:33.955 [2024-06-08 21:27:11.920573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.921083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.921113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.921601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.921971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.922000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.922491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.922987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.923015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.923529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.924018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.924047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.924448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.924982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.925010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.925505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.926012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.926041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.926514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.926889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.926928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 
00:31:33.955 [2024-06-08 21:27:11.927425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.927934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.927963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.928343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.928837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.928867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.929360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.929885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.929917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.930298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.930829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.930861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.931363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.931891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.931921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.932464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.933008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.933038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.933393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.933923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.933953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 
00:31:33.955 [2024-06-08 21:27:11.934475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.935010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.935040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.935544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.936045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.936075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.936563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.937071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.937102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.955 qpair failed and we were unable to recover it. 00:31:33.955 [2024-06-08 21:27:11.937589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.938070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.955 [2024-06-08 21:27:11.938100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.938598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.939083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.939112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.939629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.940139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.940167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.940754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.941236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.941281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 
00:31:33.956 [2024-06-08 21:27:11.941796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.942284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.942313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.942741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.943256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.943285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.943718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.944088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.944116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.944611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.945095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.945123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.945616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.945986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.946015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.946524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.947029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.947058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.947530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.947936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.947978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 
00:31:33.956 [2024-06-08 21:27:11.948517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.949018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.949047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.949550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.950072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.950101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.950477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.950958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.950986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.951486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.951952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.951980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.952464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.952989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.953018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.953421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.953973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.954005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.954491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.954900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.954928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 
00:31:33.956 [2024-06-08 21:27:11.955423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.955943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.955974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.956421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.956787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.956815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.957299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.957724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.957754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.958265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.958737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.958768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.959226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.959728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.959833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.960429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.960922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.960954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 00:31:33.956 [2024-06-08 21:27:11.961436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.961925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.961954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.956 qpair failed and we were unable to recover it. 
00:31:33.956 [2024-06-08 21:27:11.962625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.963223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.956 [2024-06-08 21:27:11.963263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.963784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.964264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.964294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.964809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.965302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.965332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.965879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.966370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.966400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.966882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.967261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.967304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.967638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.968092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.968122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.968620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.969146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.969176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 
00:31:33.957 [2024-06-08 21:27:11.969751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.970285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.970325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.970904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.971388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.971432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.971935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.972416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.972446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.972941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.973444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.973476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.973998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.974499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.974528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.975026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.975422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.975452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.975938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.976623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.976729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 
00:31:33.957 [2024-06-08 21:27:11.977286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.977794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.977827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.978314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.978797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.978829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.979321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.979899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.980004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.980714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.981353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.981393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.981926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.982416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.982448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.982908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.983422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.983453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.983932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.984424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.984454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 
00:31:33.957 [2024-06-08 21:27:11.984966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.985455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.985487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.985988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.986704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.986807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.987375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.988016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.988049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.988467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.988887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.988917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.957 [2024-06-08 21:27:11.989323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.989697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.957 [2024-06-08 21:27:11.989728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.957 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.990219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.990820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.990924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.991658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.992295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.992336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 
00:31:33.958 [2024-06-08 21:27:11.992865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.993260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.993290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.993794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.994289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.994320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.994811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.995309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.995339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.995643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.996112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.996142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.996523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.997014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.997043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.997553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.998047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.998075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:11.998473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.999020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:11.999049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 
00:31:33.958 [2024-06-08 21:27:11.999539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.000022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.000051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.000558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.000935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.000966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.001456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.002002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.002031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.002522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.003009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.003038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.003542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.003975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.004003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.004496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.005000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.005028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.005534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.006031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.006061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 
00:31:33.958 [2024-06-08 21:27:12.006469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.006998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.007028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.007526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.007896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.007928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.008427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.008828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.008857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.009351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.009825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.009855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.010351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.010900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.010933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.011430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.011890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.011918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.012291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.012688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.012717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 
00:31:33.958 [2024-06-08 21:27:12.013226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.013714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.013744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.958 [2024-06-08 21:27:12.014227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.014711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.958 [2024-06-08 21:27:12.014818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.958 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.015358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.015798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.015832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.016211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.016715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.016746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.017170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.017527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.017558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.018074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.018463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.018494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.018997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.019482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.019512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 
00:31:33.959 [2024-06-08 21:27:12.020018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.020532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.020562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.021038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.021528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.021560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.959 qpair failed and we were unable to recover it. 00:31:33.959 [2024-06-08 21:27:12.022068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.022579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.959 [2024-06-08 21:27:12.022610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.023115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.023594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.023623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.024004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.024490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.024519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.025041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.025540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.025570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.025939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.026304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.026337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 
00:31:33.960 [2024-06-08 21:27:12.026862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.027379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.027420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.027913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.028433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.028464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.029034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.029520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.029554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.030045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.030422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.030461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.030992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.031469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.031520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.031953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.032440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.032470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:33.960 [2024-06-08 21:27:12.033012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.033499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.033530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 
00:31:33.960 [2024-06-08 21:27:12.034082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.034544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.960 [2024-06-08 21:27:12.034575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:33.960 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.035061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.035542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.035571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.035985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.036382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.036420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.036833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.037317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.037346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.037894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.038243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.038273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.038803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.039296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.039325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.039721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.040269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.040316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 
00:31:34.227 [2024-06-08 21:27:12.040838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.041284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.041313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.227 qpair failed and we were unable to recover it. 00:31:34.227 [2024-06-08 21:27:12.041876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.227 [2024-06-08 21:27:12.042348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.042378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.042909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.043413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.043444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.043903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.044355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.044384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.045087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.045707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.045811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.046289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.046782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.046815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.047295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.047750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.047781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 
00:31:34.228 [2024-06-08 21:27:12.048207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.048728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.048835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.049264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.049667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.049700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.050185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.050679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.050722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.051277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.051740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.051769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.052145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.052656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.052688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.053206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.053715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.053746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.054226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.054818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.054923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 
00:31:34.228 [2024-06-08 21:27:12.055681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.056312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.056352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.057007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.057621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.057727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.058254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.058794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.058825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.059310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.059863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.059892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.060270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.060657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.060686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.061188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.061697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.061739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.062237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.062800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.062903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 
00:31:34.228 [2024-06-08 21:27:12.063662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.064254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.064293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.064716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.065177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.065208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.065639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.066119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.066148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.228 [2024-06-08 21:27:12.066693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.067290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.228 [2024-06-08 21:27:12.067330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.228 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.067926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.068424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.068455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.068960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.069475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.069529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.070055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.070679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.070785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 
00:31:34.229 [2024-06-08 21:27:12.071248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.071829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.071861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.072348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.072839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.072871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.073363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.073829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.073861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.074364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.074763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.074794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.075253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.075731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.075762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.076262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.076772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.076802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.077282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.077762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.077792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 
00:31:34.229 [2024-06-08 21:27:12.078290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.078766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.078795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.079299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.079788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.079818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.080311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.080709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.080743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.081220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.081724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.081829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.082384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.082925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.082959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.083476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.083998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.084028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.084425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.084956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.084985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 
00:31:34.229 [2024-06-08 21:27:12.085624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.086136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.086175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.086812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.087427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.087468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.088027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.088475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.088549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.089142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.089685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.089790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.090340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.090751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.090783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.091264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.091739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.091770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.229 qpair failed and we were unable to recover it. 00:31:34.229 [2024-06-08 21:27:12.092265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.229 [2024-06-08 21:27:12.092776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.092808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 
00:31:34.230 [2024-06-08 21:27:12.093304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.093816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.093847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.094350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.094639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.094671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.095175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.095766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.095870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.096472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.096975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.097006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.097481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.097978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.098007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.098380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.098904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.098936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.099436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.099870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.099900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 
00:31:34.230 [2024-06-08 21:27:12.100287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.100813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.100845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.101329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.101801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.101830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.102328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.102797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.102826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.103294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.103776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.103806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.104307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.104791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.104822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.105309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.105876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.105906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.106412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.106898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.106928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 
00:31:34.230 [2024-06-08 21:27:12.107466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.107999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.108029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.108651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.109294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.109334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.109768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.110261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.110290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.110670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.111150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.111179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.111660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.112169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.112197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.112710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.113310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.113351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.113908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.114399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.114444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 
00:31:34.230 [2024-06-08 21:27:12.114963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.115473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.115526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.116055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.116623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.116728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.230 qpair failed and we were unable to recover it. 00:31:34.230 [2024-06-08 21:27:12.117290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.230 [2024-06-08 21:27:12.117630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.117662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.118130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.118612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.118642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.119117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.119576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.119606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.119967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.120472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.120503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.120923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.121434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.121464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 
00:31:34.231 [2024-06-08 21:27:12.121941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.122298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.122328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.122850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.123339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.123369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.124037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.124397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.124452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.124981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.125633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.125737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.126298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.126627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.126659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.127125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.127501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.127532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.128026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.128525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.128556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 
00:31:34.231 [2024-06-08 21:27:12.128952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.129429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.129459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.129993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.130453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.130484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.130851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.131284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.131314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.131778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.132265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.132296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.132829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.133167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.133196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.133657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.134021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.134050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.134367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.134781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.134811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 
00:31:34.231 [2024-06-08 21:27:12.135291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.135805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.135835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.136344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.136837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.136866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.137381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.137854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.137883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.138239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.138809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.138915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.139653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.140258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.140297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.231 qpair failed and we were unable to recover it. 00:31:34.231 [2024-06-08 21:27:12.140739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.141101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.231 [2024-06-08 21:27:12.141138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.141675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.142024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.142058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 
00:31:34.232 [2024-06-08 21:27:12.142484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.142997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.143026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.143418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.143922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.143951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.144345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.144763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.144803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.145327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.145800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.145831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.146329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.146833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.146865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.147358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.147821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.147852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.148360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.148827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.148858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 
00:31:34.232 [2024-06-08 21:27:12.149360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.149873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.149904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.150392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.150970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.150999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.151373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.151851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.151882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.152331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.152845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.152952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.153442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.153972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.154003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.154669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.155301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.155342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.155963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.156701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.156807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 
00:31:34.232 [2024-06-08 21:27:12.157364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.157917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.157950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.158623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.159247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.159289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.159822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.160303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.160333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.160639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.161161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.161191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.161741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.162378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.162431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.162950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.163335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.163364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 00:31:34.232 [2024-06-08 21:27:12.163809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.164310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.232 [2024-06-08 21:27:12.164339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.232 qpair failed and we were unable to recover it. 
00:31:34.232 [2024-06-08 21:27:12.164814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.165235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.165263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.165659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.166172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.166202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.166695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.167211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.167261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.167754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.168165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.168197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.168697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.169181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.169212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.169829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.170430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.170472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.171011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.171622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.171727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 
00:31:34.233 [2024-06-08 21:27:12.172283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.172637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.172670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.173167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.173759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.173864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.174483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.175016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.175047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.175569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.176060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.176089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.176613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.177139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.177169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.177572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.178111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.178142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.178413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.178926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.178956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 
00:31:34.233 [2024-06-08 21:27:12.179333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.179868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.179898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.180188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.180708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.180812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.181254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.181754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.181787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.182175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.182638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.182668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.233 [2024-06-08 21:27:12.183168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.183762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.233 [2024-06-08 21:27:12.183868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.233 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.184469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.185041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.185072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.185499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.186003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.186032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 
00:31:34.234 [2024-06-08 21:27:12.186547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.187055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.187084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.187573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.188058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.188086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.188576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.189083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.189116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.189505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.189939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.189969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.190380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.190931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.190962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.191470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.192018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.192048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.192546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.193033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.193063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 
00:31:34.234 [2024-06-08 21:27:12.193508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.193950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.193979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.194500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.194881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.194910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.195389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.195855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.195885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.196255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.196774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.196822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.197280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.197725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.197756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.198264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.198654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.198684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.199175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.200325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.200370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 
00:31:34.234 [2024-06-08 21:27:12.200809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.201305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.201337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.201733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.202233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.202264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.202706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.203210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.203240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.203716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.204201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.204233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.204622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.205103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.205133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.205775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.206365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.206436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.206993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.207381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.207433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 
00:31:34.234 [2024-06-08 21:27:12.207945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.208436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.208469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.234 [2024-06-08 21:27:12.208964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.209473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.234 [2024-06-08 21:27:12.209525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.234 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.210016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.210441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.210473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.210957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.211465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.211519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.212049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.212540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.212570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.213089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.213575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.213605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.214121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.214710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.214815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 
00:31:34.235 [2024-06-08 21:27:12.215364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.215941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.215973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.216469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.216886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.216924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.217418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.217934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.217977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.218499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.219005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.219036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.219423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.219917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.219947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.220419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.220925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.220955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.221466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.222001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.222031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 
00:31:34.235 [2024-06-08 21:27:12.222417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.222806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.222843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.223368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.223875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.223980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.224627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.225155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.225195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.225577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.226080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.226110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.226597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.227087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.227115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.227714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.228275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.228314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.228894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.229263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.229292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 
00:31:34.235 [2024-06-08 21:27:12.229786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.230274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.230303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.230791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.231278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.231307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.231778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.232266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.232295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.232724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.233198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.233228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.235 qpair failed and we were unable to recover it. 00:31:34.235 [2024-06-08 21:27:12.233713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.235 [2024-06-08 21:27:12.234192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.234223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.234803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.235447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.235492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.236040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.236533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.236564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 
00:31:34.236 [2024-06-08 21:27:12.237058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.237541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.237572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.238105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.238594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.238623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.239125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.239582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.239614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.239991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.240475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.240506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.240944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.241358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.241387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.241877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.242362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.242391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.242902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.243415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.243446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 
00:31:34.236 [2024-06-08 21:27:12.243950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.244440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.244472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.244973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.245459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.245488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.245998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.246658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.246763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.247241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.247719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.247753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.248150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.248549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.248580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.248973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.249465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.249496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.249912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.250384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.250422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 
00:31:34.236 [2024-06-08 21:27:12.250901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.251396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.251434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.251969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.252324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.252352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.252858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.253338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.253367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.253849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.254333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.254361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.254856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.255229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.255257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.255756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.256254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.256282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 00:31:34.236 [2024-06-08 21:27:12.256710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.257192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.236 [2024-06-08 21:27:12.257221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.236 qpair failed and we were unable to recover it. 
00:31:34.236 [2024-06-08 21:27:12.257828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.258617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.258722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.259329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.259829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.259862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.260239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.260907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.261011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.261711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.262299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.262338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.262765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.263306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.263338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.263754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.264239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.264269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.264781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.265253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.265281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 
00:31:34.237 [2024-06-08 21:27:12.265745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.266225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.266254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.266733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.267222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.267251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.267766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.268254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.268283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.268771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.269259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.269287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.269798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.270252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.270281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.270815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.271278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.271308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.271752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.272260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.272289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 
00:31:34.237 [2024-06-08 21:27:12.272793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.273278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.273307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.273802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.274283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.274313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.274614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.275097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.275126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.275655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.276140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.276169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.276582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.277060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.277090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.277595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.277954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.277982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.278461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.279005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.279033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 
00:31:34.237 [2024-06-08 21:27:12.279418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.279971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.280000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.237 [2024-06-08 21:27:12.280622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.281232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.237 [2024-06-08 21:27:12.281271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.237 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.281795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.282248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.282279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.282629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.283127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.283157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.283666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.284169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.284199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.284603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.285115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.285145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.285733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.286369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.286425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 
00:31:34.238 [2024-06-08 21:27:12.286917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.287293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.287334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.287831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.288209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.288239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.288732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.289367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.289424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.290002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.290648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.290751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.291317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.291874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.291906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.292380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.292926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.292956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.293475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.294000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.294033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 
00:31:34.238 [2024-06-08 21:27:12.294671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.295267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.295307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.295872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.296354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.296383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.296922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.297300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.297344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.297870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.298357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.298385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.298818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.299207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.299236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.299864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.300696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.300802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.301395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.301823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.301854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 
00:31:34.238 [2024-06-08 21:27:12.302330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.302900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.238 [2024-06-08 21:27:12.303004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.238 qpair failed and we were unable to recover it. 00:31:34.238 [2024-06-08 21:27:12.303740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.304378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.304435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.305046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.305642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.305747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.306281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.306826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.306858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.307351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.307784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.307814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.308191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.308702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.308737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.309266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.309775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.309808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 
00:31:34.239 [2024-06-08 21:27:12.310261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.310684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.310716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.311255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.311739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.311769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.312229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.312696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.239 [2024-06-08 21:27:12.312728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.239 qpair failed and we were unable to recover it. 00:31:34.239 [2024-06-08 21:27:12.313225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.313819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.313924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.503 qpair failed and we were unable to recover it. 00:31:34.503 [2024-06-08 21:27:12.314474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.314851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.314882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.503 qpair failed and we were unable to recover it. 00:31:34.503 [2024-06-08 21:27:12.315250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.315765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.315796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.503 qpair failed and we were unable to recover it. 00:31:34.503 [2024-06-08 21:27:12.316301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.316787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.503 [2024-06-08 21:27:12.316819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.503 qpair failed and we were unable to recover it. 
00:31:34.504 [2024-06-08 21:27:12.317375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.317881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.317916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.318329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.318794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.318824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.319275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.319656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.319686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.320226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.320689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.320719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.321119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.321704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.321809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.322275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.322801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.322834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.323338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.323798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.323828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 
00:31:34.504 [2024-06-08 21:27:12.324332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.324806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.324838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.325253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.325742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.325774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.326236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.326697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.326727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.327218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.327714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.327818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.328375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.328968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.329000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.329662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.330250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.330290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.330866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.331346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.331377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 
00:31:34.504 [2024-06-08 21:27:12.331984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.332686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.332791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.333437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.333870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.333902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.334289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.334904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.335009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.335458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.336019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.336051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.336560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.337052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.337081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.337548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.338039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.338069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.338557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.339061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.339090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 
00:31:34.504 [2024-06-08 21:27:12.339482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.340037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.340067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.340450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.340968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.340998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.341519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.341982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.342011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.504 qpair failed and we were unable to recover it. 00:31:34.504 [2024-06-08 21:27:12.342508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.343026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.504 [2024-06-08 21:27:12.343055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.343392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.343939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.343969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.344359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.344644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.344673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.345073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.345577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.345605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 
00:31:34.505 [2024-06-08 21:27:12.346105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.346580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.346610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.347053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.347534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.347563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.348062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.348507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.348538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.349054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.349528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.349558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.349943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.350462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.350491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.351010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.351532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.351561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.352047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.352417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.352449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 
00:31:34.505 [2024-06-08 21:27:12.352847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.353334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.353375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.353859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.354353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.354382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.354858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.355372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.355410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.355923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.356310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.356348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.356909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.357308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.357338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.357944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.358658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.358764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.359372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.359926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.359959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 
00:31:34.505 [2024-06-08 21:27:12.360470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.360966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.360996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.361482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.361987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.362016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.362503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.363014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.363044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.363554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.364087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.364129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.364697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.365092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.365122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.365520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.366017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.366046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 00:31:34.505 [2024-06-08 21:27:12.366615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.367106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.367135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.505 qpair failed and we were unable to recover it. 
00:31:34.505 [2024-06-08 21:27:12.367609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.505 [2024-06-08 21:27:12.368105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.368133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.368745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.369344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.369384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.369851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.370343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.370373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.370831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.371313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.371341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.371839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.372330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.372361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.372752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.373247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.373279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.373890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.374370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.374422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 
00:31:34.506 [2024-06-08 21:27:12.374972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.375651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.375756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.376369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.376883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.376915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.377470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.378013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.378043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.378544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.379062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.379092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.379576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.380074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.380104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.380579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.380991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.381019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.381311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.381700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.381728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 
00:31:34.506 [2024-06-08 21:27:12.382226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.382788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.382818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.383305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.383848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.383879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.384376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.384862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.384903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.385399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.385892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.385922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.386388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.387039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.387144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.387816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.388423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.388465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.388988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.389620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.389727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 
00:31:34.506 [2024-06-08 21:27:12.390286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.390815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.390919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.391677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.392309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.392350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.392886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.393344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.393374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.393816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.394323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.506 [2024-06-08 21:27:12.394352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.506 qpair failed and we were unable to recover it. 00:31:34.506 [2024-06-08 21:27:12.394839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.395347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.395376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.395948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.396430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.396461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.396877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.397365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.397394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 
00:31:34.507 [2024-06-08 21:27:12.397920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.398417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.398446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.398849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.399379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.399422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.399933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.400428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.400458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.400977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.401475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.401504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.401898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.402278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.402311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.402830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.403351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.403380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.403917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.404302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.404332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 
00:31:34.507 [2024-06-08 21:27:12.404864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.405386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.405423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.405807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.406377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.406413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.406940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.407356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.407385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.407966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.408429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.408459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.408998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.409652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.409756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.410357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.410871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.410903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.411420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.411947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.411978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 
00:31:34.507 [2024-06-08 21:27:12.412682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.413169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.413209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.413822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.414476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.414544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.415092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.415473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.415508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.416055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.416446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.416483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.416862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.417344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.417374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.417910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.418367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.418397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 00:31:34.507 [2024-06-08 21:27:12.418883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.419370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.419400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.507 qpair failed and we were unable to recover it. 
00:31:34.507 [2024-06-08 21:27:12.419909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.507 [2024-06-08 21:27:12.420436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.420467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.420973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.421488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.421519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.422040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.422422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.422452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.422960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.423470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.423522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.424030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.424677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.424781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.425382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.425921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.425953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.426474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.426997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.427027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 
00:31:34.508 [2024-06-08 21:27:12.427674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.428258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.428298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.428814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.429338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.429369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.429769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.430305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.430338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.430833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.431234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.431261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.431690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.432169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.432199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.432712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.433082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.433112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.433625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.434111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.434141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 
00:31:34.508 [2024-06-08 21:27:12.434791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.435241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.435282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.435652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.436142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.436172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.436682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.437166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.437198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.437678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.438164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.438194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.438763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.439362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.439418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.439992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.440677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.440781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.441380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.441930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.441964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 
00:31:34.508 [2024-06-08 21:27:12.442674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.443271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.443312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.443840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.444284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.444315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.444790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.445342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.445373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.445864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.446280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.446312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.508 qpair failed and we were unable to recover it. 00:31:34.508 [2024-06-08 21:27:12.446677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.508 [2024-06-08 21:27:12.447135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.447166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.447755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.448267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.448316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.448939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.449358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.449389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 
00:31:34.509 [2024-06-08 21:27:12.449818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.450190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.450220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.450715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.451216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.451246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.451724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.452218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.452246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.452634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.453123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.453152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.453660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.454144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.454174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.454657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.455139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.455168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.455766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.456385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.456439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 
00:31:34.509 [2024-06-08 21:27:12.457017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.457620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.457728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.458287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.458678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.458719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.459230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.459723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.459755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.460285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.460749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.460778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.461251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.461712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.461742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.462238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.462833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.462937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.463657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.464293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.464333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 
00:31:34.509 [2024-06-08 21:27:12.464881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.465372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.465420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.465932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.466431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.466462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.509 [2024-06-08 21:27:12.466993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.467467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.509 [2024-06-08 21:27:12.467520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.509 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.468053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.468681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.468786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.469349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.469861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.469893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.470372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.470963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.471067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.471764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.472416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.472458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 
00:31:34.510 [2024-06-08 21:27:12.472980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.473685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.473792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.474315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.474809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.474842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.475213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.475813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.475918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.476625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.477218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.477258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.477828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.478303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.478333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.478845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.479314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.479345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.479936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.480464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.480517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 
00:31:34.510 [2024-06-08 21:27:12.481027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.481508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.481540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.482031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.482399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.482449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.482824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.483362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.483392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.483891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.484348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.484377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.484848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.485318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.485347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.485700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.486183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.486212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.486612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.487014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.487050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 
00:31:34.510 [2024-06-08 21:27:12.487551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.488059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.488088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.488606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.489094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.489122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.489618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.490132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.490162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.490751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.491359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.491398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.491916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.492325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.492354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.492838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.493358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.493390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 00:31:34.510 [2024-06-08 21:27:12.493963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.494690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.510 [2024-06-08 21:27:12.494794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.510 qpair failed and we were unable to recover it. 
00:31:34.511 [2024-06-08 21:27:12.495341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.495857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.495962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.496685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.497173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.497221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.497804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.498351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.498382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.498818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.499300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.499329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.499839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.500360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.500391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.500792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.501145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.501174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.501649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.502129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.502157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 
00:31:34.511 [2024-06-08 21:27:12.502823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.503456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.503499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.504017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.504507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.504561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.505051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.505541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.505571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.506062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.506563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.506593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.507006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.507508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.507542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.508043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.508449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.508480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.508907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.509448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.509480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 
00:31:34.511 [2024-06-08 21:27:12.509975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.510474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.510503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.511008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.511285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.511316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.511705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.512083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.512112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.512604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.513071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.513100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.513613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.514106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.514135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.514511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.514988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.515016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.515513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.516002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.516031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 
00:31:34.511 [2024-06-08 21:27:12.516498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.517011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.517040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.517574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.518062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.518092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.518631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.519000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.519032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.511 qpair failed and we were unable to recover it. 00:31:34.511 [2024-06-08 21:27:12.519445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.519943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.511 [2024-06-08 21:27:12.519972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.520462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.520985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.521013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.521455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.521903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.521932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.522441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.522941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.522969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 
00:31:34.512 [2024-06-08 21:27:12.523462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.523847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.523886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.524317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.524813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.524845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.525332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.525808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.525838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.526331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.526716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.526755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.527284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.527752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.527783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.528293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.528664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.528696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.529184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.529551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.529582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 
00:31:34.512 [2024-06-08 21:27:12.530084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.530469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.530504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.531014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.531383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.531456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.531993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.532358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.532387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.532912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.533279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.533315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.533803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.534176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.534205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.534782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.535433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.535474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.536015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.536681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.536785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 
00:31:34.512 [2024-06-08 21:27:12.537302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.537713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.537746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.538212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.538866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.538971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.539658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.540247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.540288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.540843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.541315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.541346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.541662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.542140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.542168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.542682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.543153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.543183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 00:31:34.512 [2024-06-08 21:27:12.543767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.544400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.544486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.512 qpair failed and we were unable to recover it. 
00:31:34.512 [2024-06-08 21:27:12.545008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.512 [2024-06-08 21:27:12.545677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.545781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.546362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.546789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.546822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.547211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.547698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.547802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.548356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.548953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.548985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.549391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.549912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.549941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.550320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.550896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.550999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.551437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.551943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.551974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 
00:31:34.513 [2024-06-08 21:27:12.552376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.553030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.553135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.553735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.554327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.554367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.554969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.555673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.555790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.556342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.557022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.557125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.557779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.558368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.558427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.558938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.559470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.559525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.560092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.560710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.560816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 
00:31:34.513 [2024-06-08 21:27:12.561378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.561716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.561748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.562125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.562599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.562630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.563129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.563691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.563796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.564286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.564603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.564637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.565132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.565563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.565594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.566065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.566454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.566498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.566846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.567379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.567425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 
00:31:34.513 [2024-06-08 21:27:12.567938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.568442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.568475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.568963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.569465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.569496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.570010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.570508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.570538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.571049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.571442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.571479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.513 qpair failed and we were unable to recover it. 00:31:34.513 [2024-06-08 21:27:12.571990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.513 [2024-06-08 21:27:12.572469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.572500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.572982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.573470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.573501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.573863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.574222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.574254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 
00:31:34.514 [2024-06-08 21:27:12.574766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.575250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.575282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.575692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.576185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.576217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.576721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.577212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.577241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.577619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.578098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.578127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.578628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.579108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.579137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.579614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.580104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.580132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.580642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.581124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.581152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 
00:31:34.514 [2024-06-08 21:27:12.581702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.582088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.582117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.582467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.582965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.582994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.583502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.584009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.584037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.584551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.585038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.585067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.585562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.586056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.586085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.586593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.587074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.587104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.587608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.588089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.588119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 
00:31:34.514 [2024-06-08 21:27:12.588626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.589088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.589118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.589611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.590094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.590125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.590619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.591123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.514 [2024-06-08 21:27:12.591153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.514 qpair failed and we were unable to recover it. 00:31:34.514 [2024-06-08 21:27:12.591751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.592380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.592458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.592992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.593607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.593714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.594147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.594655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.594690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.595180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.595542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.595578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-06-08 21:27:12.596067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.596647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.596753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.597367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.597792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.597826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.598310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.598827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.598858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.599350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.599827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.599858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.600317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.600672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.600703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.601172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.601750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.601855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.602398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.602929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.602960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-06-08 21:27:12.603476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.604030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.604059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.604561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.605048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-06-08 21:27:12.605078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-06-08 21:27:12.605572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.606057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.606086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.606578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.607058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.607089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.607593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.608078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.608107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.608494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.608925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.608955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.609477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.609847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.609876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 
00:31:34.782 [2024-06-08 21:27:12.610361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.610819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.610850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.611342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.611880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.611912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.612433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.612926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.612956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.613631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.614255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.614296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.614671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.615205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.615235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.615605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.616092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.616121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.616619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.617103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.617133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 
00:31:34.782 [2024-06-08 21:27:12.617637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.618126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.618156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.618652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.619135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.619164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.619764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.620348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.620389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.620972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.621472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.621528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.622034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.622520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.622551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.623038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.623524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.623555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.624021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.624416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.624445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 
00:31:34.782 [2024-06-08 21:27:12.624729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.625124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.625157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.625654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.626140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.626170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.626731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.627225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.627254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.627737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.628218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.628247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.628755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.629235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.629265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.629735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.630157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-06-08 21:27:12.630186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-06-08 21:27:12.630669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.631149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.631179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 
00:31:34.783 [2024-06-08 21:27:12.631559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.632047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.632076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.632472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.632964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.632992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.633433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.633945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.633973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.634632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.635259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.635298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.635655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.636157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.636187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.636684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.637171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.637200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.637799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.638435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.638476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 
00:31:34.783 [2024-06-08 21:27:12.639025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.639604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.639709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.640312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.640839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.640871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.641368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.641967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.642071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.642726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.643320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.643360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.643868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.644239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.644281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.644777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.645256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.645285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.645776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.646257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.646286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 
00:31:34.783 [2024-06-08 21:27:12.646720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.647210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.647238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.647739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.648220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.648250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.648754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.649276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.649305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.649803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.650288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.650319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.650815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.651299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.651329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.651876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.652370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.652398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.652831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.653346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.653378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 
00:31:34.783 [2024-06-08 21:27:12.653828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.654312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.654341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.654964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.655549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.655591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.656115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.656701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.783 [2024-06-08 21:27:12.656805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.783 qpair failed and we were unable to recover it. 00:31:34.783 [2024-06-08 21:27:12.657398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.657921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.657953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.658477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.658879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.658921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.659429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.659956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.659987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.660358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.660812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.660843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 
00:31:34.784 [2024-06-08 21:27:12.661303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.661791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.661821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.662320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.662814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.662845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.663342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.663909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.664015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.664669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.665255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.665295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.665813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.666293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.666323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.666786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.667267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.667297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.667758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.668249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.668280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 
00:31:34.784 [2024-06-08 21:27:12.668781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.669264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.669294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.669800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.670285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.670317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.670780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.671162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.671192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.671682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.672168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.672197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.672776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.673360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.673419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.673944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.674213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.674246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.674740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.675125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.675153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 
00:31:34.784 [2024-06-08 21:27:12.675641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.676125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.676154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.676650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.677129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.677158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.677751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.678378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.678449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.679003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.679609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.679715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.680312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.680775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.680808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.681295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.681783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.681812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 00:31:34.784 [2024-06-08 21:27:12.682321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.682813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.784 [2024-06-08 21:27:12.682843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.784 qpair failed and we were unable to recover it. 
00:31:34.784 [2024-06-08 21:27:12.683342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.683824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.683855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.684231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.684634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.684677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.685129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.685630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.685660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.686151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.686632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.686662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.687157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.687639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.687670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.688160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.688743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.688849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.689435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.689973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.690004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 
00:31:34.785 [2024-06-08 21:27:12.690605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.691233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.691285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.691715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.692197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.692227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.692819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.693478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.693545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.694077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.694558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.694590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.694967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.695370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.695418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.695912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.696393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.696433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.696922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.697425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.697455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 
00:31:34.785 [2024-06-08 21:27:12.697939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.698445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.698477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.698960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.699442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.699472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.699980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.700595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.700699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.701242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.701804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.701849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.702330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.702816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.702848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.703348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.703805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.703838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.704330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.704814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.704844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 
00:31:34.785 [2024-06-08 21:27:12.705341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.705975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.706079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.706718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.707300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.707341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.707852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.708336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.708366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.785 qpair failed and we were unable to recover it. 00:31:34.785 [2024-06-08 21:27:12.708861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.785 [2024-06-08 21:27:12.709342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.709371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.709860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.710345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.710374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.710892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.711372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.711413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.711888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.712373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.712425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 
00:31:34.786 [2024-06-08 21:27:12.712949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.713429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.713460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.713964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.714451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.714482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.714899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.715379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.715419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.715921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.716420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.716451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.716938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.717428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.717459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.717859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.718338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.718366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.718857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.719341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.719370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 
00:31:34.786 [2024-06-08 21:27:12.719867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.720236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.720264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.720730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.721214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.721242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.721717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.722201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.722229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.722718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.723200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.723230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.723834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.724477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.724544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.725069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.725552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.725583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.726097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.726455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.726485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 
00:31:34.786 [2024-06-08 21:27:12.726993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.727489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.727518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.728022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.728504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.728534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.786 qpair failed and we were unable to recover it. 00:31:34.786 [2024-06-08 21:27:12.728998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.786 [2024-06-08 21:27:12.729486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.729517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.729909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.730445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.730478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.730858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.731389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.731434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.731963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.732444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.732474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.732985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.733499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.733530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 
00:31:34.787 [2024-06-08 21:27:12.734040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.734505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.734535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.735032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.735511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.735541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.736042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.736562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.736592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.737088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.737568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.737599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.738016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.738487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.738517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.739087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.739573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.739603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.740092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.740580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.740610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 
00:31:34.787 [2024-06-08 21:27:12.741127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.741607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.741638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.742148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.742726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.742832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.743396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.743971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.744001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.744598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.745227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.745266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.745795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.746277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.746308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.746759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.747275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.747304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.747791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.748269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.748297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 
00:31:34.787 [2024-06-08 21:27:12.748794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.749273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.749303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.749767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.750250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.750279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.750536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.751039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.751068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.751598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.752067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.752097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.752596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.753083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.753112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.753682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.754172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.754201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.787 qpair failed and we were unable to recover it. 00:31:34.787 [2024-06-08 21:27:12.754708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.787 [2024-06-08 21:27:12.755198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.755227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 
00:31:34.788 [2024-06-08 21:27:12.755725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.756361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.756419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.756981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.757650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.757754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.758354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.758882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.758914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.759413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.759911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.759939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.760470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.760899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.760941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.761443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.761963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.761993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.762453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.762869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.762898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 
00:31:34.788 [2024-06-08 21:27:12.763393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.763909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.763938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.764443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.764933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.764962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.765459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.765925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.765954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.766453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.766939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.766969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.767444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.767928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.767956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.768441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.768923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.768953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.769370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.769936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.769966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 
00:31:34.788 [2024-06-08 21:27:12.770457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.770942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.770970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.771471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.771855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.771884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.772395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.772904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.772933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.773434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.773926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.773956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.774440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.774919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.774950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.775450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.775928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.775957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.776455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.776939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.776968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 
00:31:34.788 [2024-06-08 21:27:12.777468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.777959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.777988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.778489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.778859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.778888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.779350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.779730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.788 [2024-06-08 21:27:12.779773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.788 qpair failed and we were unable to recover it. 00:31:34.788 [2024-06-08 21:27:12.780283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.780766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.780797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.781295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.781667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.781701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.782189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.782674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.782705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.783201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.783688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.783719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 
00:31:34.789 [2024-06-08 21:27:12.784206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.784788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.784896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.785605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.786237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.786278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.786799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.787302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.787332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.787793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.788276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.788306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.788710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.789189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.789217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.789731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.790214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.790243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.790741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.791220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.791249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 
00:31:34.789 [2024-06-08 21:27:12.791748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.792231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.792260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.792656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.793147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.793176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.793654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.794137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.794166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.794665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.795146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.795175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.795767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.796395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.796453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.796997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.797624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.797729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.798282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.798808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.798841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 
00:31:34.789 [2024-06-08 21:27:12.799340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.799826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.799857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.800356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.800927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.801032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.801679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.802306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.802348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.802939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.803431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.803463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.803994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.804650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.804754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.805345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.805655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.805706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 00:31:34.789 [2024-06-08 21:27:12.806193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.806813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.789 [2024-06-08 21:27:12.806919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.789 qpair failed and we were unable to recover it. 
00:31:34.789 [2024-06-08 21:27:12.807399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.807956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.807993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.808442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2583207 Killed "${NVMF_APP[@]}" "$@" 00:31:34.790 [2024-06-08 21:27:12.808817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.808851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.809365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 21:27:12 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:34.790 21:27:12 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:34.790 [2024-06-08 21:27:12.809902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.809933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 21:27:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:34.790 21:27:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:34.790 [2024-06-08 21:27:12.810437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 [2024-06-08 21:27:12.810950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.810981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.811638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.812148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.812197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.812716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.813089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.813120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 
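The shell message from target_disconnect.sh line 44 above ("Killed ${NVMF_APP[@]}") marks the point where the nvmf target application was killed, so nothing is listening on 10.0.0.2:4420 any more. From here on every host-side connect() returns errno 111, which on Linux is ECONNREFUSED, and each attempt produces the same triplet: the posix_sock_create error, the nvme_tcp_qpair_connect_sock socket error, and "qpair failed and we were unable to recover it." A minimal bash sketch of that refusal, illustrative only and not part of the test suite (it assumes bash's /dev/tcp redirection is available):

    # Probe the NVMe/TCP listener; with nvmf_tgt gone the kernel refuses the
    # connection immediately, matching the ECONNREFUSED (errno 111) seen above.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection (target not listening)"
    fi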
00:31:34.790 [2024-06-08 21:27:12.813484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.813917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.813946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.814425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.814918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.814948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.815651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.816252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.816293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.816892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.817295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.817326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.817803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 21:27:12 -- nvmf/common.sh@469 -- # nvmfpid=2584707 00:31:34.790 21:27:12 -- nvmf/common.sh@470 -- # waitforlisten 2584707 00:31:34.790 [2024-06-08 21:27:12.818285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.818316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 21:27:12 -- common/autotest_common.sh@819 -- # '[' -z 2584707 ']' 00:31:34.790 21:27:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:34.790 [2024-06-08 21:27:12.818782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 21:27:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.790 21:27:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:34.790 [2024-06-08 21:27:12.819302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.819333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 21:27:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:34.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.790 21:27:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:34.790 [2024-06-08 21:27:12.819806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 [2024-06-08 21:27:12.820291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.820321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.820790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.821283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.821315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.821793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.822217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.822247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.822624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.823102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.823133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.823638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.824110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.824139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.824642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.825003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.825032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.825540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.825919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.825950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 
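The interleaved trace shows the test restarting the target: nvmfappstart launches nvmf_tgt (here pid 2584707) and waitforlisten blocks until the new process answers on the UNIX domain socket /var/tmp/spdk.sock. A rough sketch of that wait loop, assuming SPDK's scripts/rpc.py and the rpc_get_methods RPC are available (this is not the actual waitforlisten implementation):

    # Poll the freshly started nvmf_tgt until its RPC socket responds, giving up
    # if the process dies first.
    pid=2584707
    while kill -0 "$pid" 2>/dev/null; do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt is up and listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 1
    done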
00:31:34.790 [2024-06-08 21:27:12.826439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.826864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.826894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.827383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.827886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.827918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.790 qpair failed and we were unable to recover it. 00:31:34.790 [2024-06-08 21:27:12.828428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.828839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.790 [2024-06-08 21:27:12.828869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.829386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.829919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.829949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.830435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.830807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.830837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.831257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.831717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.831748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.832252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.832760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.832792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 
00:31:34.791 [2024-06-08 21:27:12.833288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.833757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.833787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.834265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.834640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.834687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.835086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.835522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.835557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.836068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.836556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.836586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.837082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.837568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.837599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.837942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.838428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.838460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.838956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.839319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.839348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 
00:31:34.791 [2024-06-08 21:27:12.839800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.840316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.840344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.840737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.841218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.841248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.841613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.842101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.842131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.842620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.843072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.843102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.843478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.843994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.844023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.844531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.845020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.845049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.845383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.845899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.845929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 
00:31:34.791 [2024-06-08 21:27:12.846433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.846895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.846925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.847423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.847906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.847936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.848443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.848941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.848970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.849671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.850289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.850330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.791 qpair failed and we were unable to recover it. 00:31:34.791 [2024-06-08 21:27:12.850566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.791 [2024-06-08 21:27:12.851093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.851124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.851424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.851957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.851987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.852392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.852930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.852961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 
00:31:34.792 [2024-06-08 21:27:12.853327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.853877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.853981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.854709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.855229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.855279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.855675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.856163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.856193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.856579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.857073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.857102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.857596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.858085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.858114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.858483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.859004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.859033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.859551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.859911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.859940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 
00:31:34.792 [2024-06-08 21:27:12.860441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.860932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.860961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.861462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.861952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.861988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.862472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.863009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.863038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.863511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.863876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.863913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.864429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.864933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.864963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.865458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.865996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.866027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 00:31:34.792 [2024-06-08 21:27:12.866488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.867013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.792 [2024-06-08 21:27:12.867044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:34.792 qpair failed and we were unable to recover it. 
00:31:35.059 [2024-06-08 21:27:12.867554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.868051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.868080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.059 qpair failed and we were unable to recover it. 00:31:35.059 [2024-06-08 21:27:12.868499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.869044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.869075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.059 qpair failed and we were unable to recover it. 00:31:35.059 [2024-06-08 21:27:12.869578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.870076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.870105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.059 qpair failed and we were unable to recover it. 00:31:35.059 [2024-06-08 21:27:12.870155] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:35.059 [2024-06-08 21:27:12.870211] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.059 [2024-06-08 21:27:12.870622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.871090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.871119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.059 qpair failed and we were unable to recover it. 00:31:35.059 [2024-06-08 21:27:12.871640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.872148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.872186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.059 qpair failed and we were unable to recover it. 00:31:35.059 [2024-06-08 21:27:12.872782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.873438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.873481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.059 qpair failed and we were unable to recover it. 
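The initialization line above shows the DPDK EAL parameters the nvmf application was started with, including the coremask "-c 0xF0". A short C sketch decoding that mask (an illustration of the arithmetic, not SPDK or DPDK code): 0xF0 is binary 1111 0000, so bits 4 through 7 are set, i.e. cores 4, 5, 6 and 7 — four cores in total, consistent with the "Total cores available: 4" notice that appears further down in this log.

  /* sketch: decode the "-c 0xF0" EAL coremask logged above */
  #include <stdio.h>

  int main(void)
  {
      unsigned long mask = 0xF0;   /* value taken from the logged EAL parameters */
      int count = 0;

      for (int core = 0; core < 64; core++) {
          if (mask & (1UL << core)) {
              printf("core %d selected\n", core);
              count++;
          }
      }
      printf("total cores: %d\n", count);   /* prints 4 for mask 0xF0 */
      return 0;
  }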
00:31:35.059 [2024-06-08 21:27:12.873926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.059 [2024-06-08 21:27:12.874324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.874359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.874793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.875275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.875306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.875727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.876220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.876251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.876589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.877093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.877124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.877618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.878112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.878143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.878514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.879001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.879030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.879538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.879901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.879932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 
00:31:35.060 [2024-06-08 21:27:12.880423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.880827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.880860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.881337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.881833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.881875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.882264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.882769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.882800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.883078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.883611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.883641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.884025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.884271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.884299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.884703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.885115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.885144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.885651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.886157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.886185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 
00:31:35.060 [2024-06-08 21:27:12.886704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.887185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.887214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.887656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.888072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.888112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.888646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.888921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.888954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.889446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.889826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.889857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.890231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.890752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.890783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.891289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.891792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.891823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.892345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.892839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.892869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 
00:31:35.060 [2024-06-08 21:27:12.893366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.893847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.893877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.894384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.894746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.894778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.895308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.895895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.896001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.896262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.896578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.060 [2024-06-08 21:27:12.896611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.060 qpair failed and we were unable to recover it. 00:31:35.060 [2024-06-08 21:27:12.897124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.897610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.897640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.898106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.898630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.898661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.899134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.899621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.899652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 
00:31:35.061 [2024-06-08 21:27:12.900144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.900627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.900658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.901166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.901740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.901846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.902216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.902714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.902746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.903215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.903645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.903749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.904243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.904749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.904781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.905285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.061 [2024-06-08 21:27:12.905778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.905808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.906183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.906713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.906746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 
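The EAL message embedded above ("No free 2048 kB hugepages reported on node 1") refers to the per-NUMA-node hugepage counters that Linux exposes under sysfs. A small C sketch for inspecting those counters on node 1 (the node named in the message; the sysfs paths are standard Linux, but the presence of node1 and of the 2048 kB page size is assumed here):

  /* sketch: print 2048 kB hugepage counters for NUMA node 1 via sysfs */
  #include <stdio.h>

  static long read_counter(const char *path)
  {
      long v = -1;
      FILE *f = fopen(path, "r");
      if (f) {
          if (fscanf(f, "%ld", &v) != 1)
              v = -1;
          fclose(f);
      }
      return v;
  }

  int main(void)
  {
      const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
      char path[256];

      snprintf(path, sizeof(path), "%s/nr_hugepages", base);
      printf("nr_hugepages   = %ld\n", read_counter(path));

      snprintf(path, sizeof(path), "%s/free_hugepages", base);
      printf("free_hugepages = %ld\n", read_counter(path));
      return 0;
  }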
00:31:35.061 [2024-06-08 21:27:12.907245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.907597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.907633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.908117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.908569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.908598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.909074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.909469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.909501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.910012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.910377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.910418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.910958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.911471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.911527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.912030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.912389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.912432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.912913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.913417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.913446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 
00:31:35.061 [2024-06-08 21:27:12.913933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.914446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.914477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.914989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.915476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.915507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.915943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.916305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.916336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.916861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.917250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.917279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.917668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.918044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.918073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.918359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.918893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.918924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.919429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.919922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.919951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 
00:31:35.061 [2024-06-08 21:27:12.920388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.921010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.061 [2024-06-08 21:27:12.921115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.061 qpair failed and we were unable to recover it. 00:31:35.061 [2024-06-08 21:27:12.921615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.922032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.922069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.922494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.922977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.923007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.923425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.923938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.923967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.924372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.924892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.924923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.925473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.925999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.926029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.926526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.927027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.927058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 
00:31:35.062 [2024-06-08 21:27:12.927551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.927962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.927991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.928477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.928977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.929006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.929419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.929898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.929926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.930438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.930933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.930965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.931646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.932286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.932326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.932861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.933349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.933379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.933887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.934382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.934424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 
00:31:35.062 [2024-06-08 21:27:12.934913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.935383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.935426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.935958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.936444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.936476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.936996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.937609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.937713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.938115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.938595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.938627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.939003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.939572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.939606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.940096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.940594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.940624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.941001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.941331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.941365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 
00:31:35.062 [2024-06-08 21:27:12.941900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.942260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.942293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.942789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.943279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.943309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.943815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.944299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.944330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.944842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.945327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.945356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.062 qpair failed and we were unable to recover it. 00:31:35.062 [2024-06-08 21:27:12.945883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.062 [2024-06-08 21:27:12.946369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.946399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.946877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.947414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.947446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.947937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.948446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.948477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 
00:31:35.063 [2024-06-08 21:27:12.948821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.949286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.949315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.949901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.950608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.950714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.951262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.951757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.951790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.952275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.952774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.952806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.953314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.953816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.953848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.954357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.954736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.954767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.955290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.955805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.955835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 
00:31:35.063 [2024-06-08 21:27:12.956334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.956830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.956861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.957385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.957920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.957950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.958622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.959163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.959203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.959580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.063 [2024-06-08 21:27:12.959754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.960223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.960267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.960798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.961290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.961321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.961795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.962327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.962357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.962888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.963354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.963383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 
00:31:35.063 [2024-06-08 21:27:12.963899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.964253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.964283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.964571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.965109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.965138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.965446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.965889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.965921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.966422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.966924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.966953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.967335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.967710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.967740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.968238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.968824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.968928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.969419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.969649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.969682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 
00:31:35.063 [2024-06-08 21:27:12.970183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.970782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.970887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.063 qpair failed and we were unable to recover it. 00:31:35.063 [2024-06-08 21:27:12.971389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.063 [2024-06-08 21:27:12.971912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.971945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.972454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.972952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.972984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.973618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.974186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.974226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.974751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.975032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.975064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.975561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.976057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.976087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.976589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.976973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.977016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 
00:31:35.064 [2024-06-08 21:27:12.977531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.977936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.977966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.978467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.978964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.978994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.979511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.980001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.980031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.980528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.981028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.981059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.981566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.982050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.982082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.982579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.983066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.983095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.983595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.984044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.984074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 
00:31:35.064 [2024-06-08 21:27:12.984547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.985047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.985076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.985534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.986047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.986076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.986534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.986898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.986930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.987428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.987700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.987728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.988091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.988594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.988626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.989087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.989444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.989475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.990004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.990418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.990449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 
00:31:35.064 [2024-06-08 21:27:12.990988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.991477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.064 [2024-06-08 21:27:12.991508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.064 qpair failed and we were unable to recover it. 00:31:35.064 [2024-06-08 21:27:12.991973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.992462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.992492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.992989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.993488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.993518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.993997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.994382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.994422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.994923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.995417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.995447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.995900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.996389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.996442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.996861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.997336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.997364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 
00:31:35.065 [2024-06-08 21:27:12.997887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.998379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.998420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.998889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.999376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:12.999417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:12.999910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.000448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.000481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.001008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.001492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.001522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.002016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.002623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.002726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.003334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.003749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.003781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.004260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.004770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.004876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 
00:31:35.065 [2024-06-08 21:27:13.005659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.006298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.006339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.006922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.007384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.007425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.007970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.008698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.008803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.009360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.009883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.009915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.010423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.010805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.010835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.011201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.011599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.011647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.012065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.012594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.012628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 
00:31:35.065 [2024-06-08 21:27:13.013117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.013606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.013637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.014021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.014537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.014568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.015071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.015557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.015587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.065 qpair failed and we were unable to recover it. 00:31:35.065 [2024-06-08 21:27:13.016091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.065 [2024-06-08 21:27:13.016580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.016612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.017102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.017469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.017506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.017966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.018448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.018479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.018968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.019444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.019474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 
00:31:35.066 [2024-06-08 21:27:13.019842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.020304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.020333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.020842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.021317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.021345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.021857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.022347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.022376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.022874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.023365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.023394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.023902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.024429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.024460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.024976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.025456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.025486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.025993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.026600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.026705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 
00:31:35.066 [2024-06-08 21:27:13.027262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.027644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.027679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.028136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.028500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.028535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.029016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.029499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.029530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.030033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.030522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.030552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.031058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.031549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.031580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.032090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.032620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.032652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.033039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.033425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.033456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 
00:31:35.066 [2024-06-08 21:27:13.033899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.034229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.034258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.034742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.035113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.035155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.035663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.036156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.036185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.036564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.037006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.037034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.037539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.038027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.038056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.038560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.039028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.039057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 00:31:35.066 [2024-06-08 21:27:13.039562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.039918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.066 [2024-06-08 21:27:13.039953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.066 qpair failed and we were unable to recover it. 
00:31:35.066 [2024-06-08 21:27:13.040470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.040926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.040954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.041378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.041894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.041925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.042431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.042965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.042993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.043528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.044018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.044046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.044469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.044987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.045015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.045527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.046012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.046040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.046551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.047046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.047074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 
00:31:35.067 [2024-06-08 21:27:13.047575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.048061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.048092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.048584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.049081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.049111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.049644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.050131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.050160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.050670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.051159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.051188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.051786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.052024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:35.067 [2024-06-08 21:27:13.052163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.067 [2024-06-08 21:27:13.052175] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.067 [2024-06-08 21:27:13.052183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.067 [2024-06-08 21:27:13.052444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.052489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 
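[editor's note] The app_setup_trace NOTICE entries in the chunk above are the one actionable hint in this stretch of output: the target was started with tracepoint group mask 0xFFFF, and the log itself names the two ways to collect the resulting trace. A minimal sketch of acting on that hint follows; it assumes only what those NOTICE lines state (spdk_trace available on PATH, instance id 0, shared-memory file /dev/shm/nvmf_trace.0) and is not part of the test run itself.

    # Hedged sketch: run the two follow-up actions suggested by the NOTICE lines.
    # Assumes `spdk_trace` is on PATH and the target was started as instance 0,
    # hence the /dev/shm/nvmf_trace.0 file named in the log.
    import shutil
    import subprocess

    # Capture a live snapshot of nvmf tracepoints, exactly as the log's hint says.
    subprocess.run(["spdk_trace", "-s", "nvmf", "-i", "0"], check=False)

    # Keep a copy of the trace shared-memory file for offline analysis/debug.
    shutil.copy("/dev/shm/nvmf_trace.0", "./nvmf_trace.0.snapshot")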
00:31:35.067 [2024-06-08 21:27:13.052457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:35.067 [2024-06-08 21:27:13.052636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:35.067 [2024-06-08 21:27:13.052808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:35.067 [2024-06-08 21:27:13.052808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:35.067 [2024-06-08 21:27:13.053037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.053440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.053472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.053822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.054300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.054329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.054803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.055197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.055236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.055668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.056189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.056219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.056604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.057094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.057123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.057546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.057968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.057998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 
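[editor's note] The reactor_run lines just above show SPDK's event framework bringing up one reactor per selected core (4 through 7), which matches the earlier "Total cores available: 4" notice. The core mask actually used is not printed in this excerpt; purely as an illustration, a mask such as 0xF0 would select exactly that set:

    # Illustration only: decode which cores a reactor core mask selects.
    # 0xF0 is an assumed value consistent with reactors on cores 4-7; the real
    # mask used by this run is not shown in the log excerpt above.
    def cores_from_mask(mask: int) -> list[int]:
        return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

    print(cores_from_mask(0xF0))  # -> [4, 5, 6, 7]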
00:31:35.067 [2024-06-08 21:27:13.058485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.058928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.058959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.059445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.059920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.059950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.060443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.060939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.060969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.061316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.061837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.061868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.067 qpair failed and we were unable to recover it. 00:31:35.067 [2024-06-08 21:27:13.062372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.062880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.067 [2024-06-08 21:27:13.062912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.063425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.063918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.063946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.064366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.064872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.064903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 
00:31:35.068 [2024-06-08 21:27:13.065422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.065894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.065923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.066433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.066808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.066855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.067209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.067681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.067710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.068197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.068480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.068534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.068971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.069470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.069502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.069974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.070459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.070491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.070998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.071492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.071522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 
00:31:35.068 [2024-06-08 21:27:13.072028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.072498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.072528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.073044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.073534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.073563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.074062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.074551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.074581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.074961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.075502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.075535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.075880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.076353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.076382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.076826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.077346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.077375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.077788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.078102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.078131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 
00:31:35.068 [2024-06-08 21:27:13.078627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.079112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.079142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.079638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.080123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.080152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.080668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.081161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.081190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.081786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.082373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.082437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.083006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.083623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.083725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.084147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.084670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.084704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.085078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.085657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.085759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 
00:31:35.068 [2024-06-08 21:27:13.086302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.086630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.086662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.068 [2024-06-08 21:27:13.087196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.087639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.068 [2024-06-08 21:27:13.087670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.068 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.088200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.088785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.088887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.089420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.089912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.089963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.090231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.090730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.090832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.091204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.091463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.091495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.091975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.092464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.092495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 
00:31:35.069 [2024-06-08 21:27:13.093025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.093533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.093562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.093847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.094315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.094344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.094854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.095117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.095144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.095426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.095912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.095941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.096376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.096923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.096953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.097428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.097789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.097818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.098195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.098701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.098814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 
00:31:35.069 [2024-06-08 21:27:13.099395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.099877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.099908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.100424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.100912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.100940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.101468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.102018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.102048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.102674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.103080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.103123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.103548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.104088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.104118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.104626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.104887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.104915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.105188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.105690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.105722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 
00:31:35.069 [2024-06-08 21:27:13.105999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.106519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.106550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.107041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.107541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.107570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.108069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.108477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.108518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.108767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.109232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.109261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.109732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.110056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.110085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.069 [2024-06-08 21:27:13.110348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.110879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.069 [2024-06-08 21:27:13.110911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.069 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.111425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.111790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.111818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 
00:31:35.070 [2024-06-08 21:27:13.112322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.112688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.112718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.113179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.113661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.113691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.114178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.114752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.114853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.115448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.115757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.115787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.116281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.116765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.116795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.117297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.117794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.117824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.118314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.118802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.118832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 
00:31:35.070 [2024-06-08 21:27:13.119342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.119799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.119829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.120332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.120870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.120901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.121395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.121900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.121930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.122422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.122805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.122833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.123324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.123900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.124000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.124690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.125317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.125356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.125792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.126320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.126349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 
00:31:35.070 [2024-06-08 21:27:13.126734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.127098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.127128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.127638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.128117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.128146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.128678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.129170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.129200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.129812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.130192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.130231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.130759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.131248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.131277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.070 [2024-06-08 21:27:13.131778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.132259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.070 [2024-06-08 21:27:13.132288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.070 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.132572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.133091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.133121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 
00:31:35.071 [2024-06-08 21:27:13.133591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.133986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.134016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.134493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.135007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.135035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.135537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.136035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.136064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.136558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.137051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.137081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.137423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.137696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.137724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.138211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.138705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.138734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.139067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.139550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.139579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 
00:31:35.071 [2024-06-08 21:27:13.140058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.140541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.140570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.141071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.141558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.141587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.141989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.142420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.142449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.142875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.143380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.143421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.143768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.144257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.144284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.071 [2024-06-08 21:27:13.144757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.145283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.071 [2024-06-08 21:27:13.145311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.071 qpair failed and we were unable to recover it. 00:31:35.335 [2024-06-08 21:27:13.145593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.335 [2024-06-08 21:27:13.146120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.335 [2024-06-08 21:27:13.146148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.335 qpair failed and we were unable to recover it. 
00:31:35.336 [2024-06-08 21:27:13.146609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.147086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.147115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.147618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.148099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.148128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.148669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.149159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.149189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.149778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.150366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.150425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.150919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.151281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.151311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.151826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.152312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.152343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.152809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.153057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.153085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 
00:31:35.336 [2024-06-08 21:27:13.153560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.154044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.154072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.154590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.154784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.154815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.154953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.155328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.155356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.155678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.156200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.156228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.156713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.157204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.157233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.157731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.158216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.158245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.158748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.159191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.159221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 
00:31:35.336 [2024-06-08 21:27:13.159733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.160221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.160250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.160753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.161281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.161310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.161689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.162141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.162169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.162687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.163174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.163201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.163802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.164187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.164226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.164757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.165252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.165281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.165665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.166153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.166182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 
00:31:35.336 [2024-06-08 21:27:13.166563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.166814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.166845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.167225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.167713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.167742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.168175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.168667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.168696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.336 qpair failed and we were unable to recover it. 00:31:35.336 [2024-06-08 21:27:13.169206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.336 [2024-06-08 21:27:13.169790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.169892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.170604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.171006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.171046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.171626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.172126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.172156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.172649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.173140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.173169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 
00:31:35.337 [2024-06-08 21:27:13.173676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.174164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.174193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.174789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.175434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.175477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.175859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.176348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.176378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.176823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.177188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.177220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.177780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.178434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.178476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.178815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.179120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.179150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.179548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.180040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.180070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 
00:31:35.337 [2024-06-08 21:27:13.180686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.181325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.181365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.181893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.182385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.182429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.182698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.183194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.183222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.183786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.184182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.184222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.184637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.185181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.185213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.185712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.186208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.186238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.186720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.187251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.187280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 
00:31:35.337 [2024-06-08 21:27:13.187762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.188266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.188296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.188793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.189286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.189316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.189785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.190240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.190270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.190752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.191255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.191284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.191762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.192131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.192160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.192646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.193013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.193042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.337 qpair failed and we were unable to recover it. 00:31:35.337 [2024-06-08 21:27:13.193503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.194009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.337 [2024-06-08 21:27:13.194039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 
00:31:35.338 [2024-06-08 21:27:13.194538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.194998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.195027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.195529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.196053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.196081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.196577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.197072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.197103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.197495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.198002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.198031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.198531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.199026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.199054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.199563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.200054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.200083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.200567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.200956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.200985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 
00:31:35.338 [2024-06-08 21:27:13.201484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.201863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.201892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.202397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.202784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.202824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.203113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.203399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.203445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.203733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.204213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.204242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.204615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.205118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.205146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.205517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.206027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.206057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.206535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.207027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.207056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 
00:31:35.338 [2024-06-08 21:27:13.207420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.207692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.207720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.208225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.208815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.208919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.209281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.209771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.209803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.210310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.210706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.210737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.211265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.211750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.211779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.212123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.212521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.212553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.213052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.213577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.213610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 
00:31:35.338 [2024-06-08 21:27:13.214106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.214640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.214670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.215161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.215656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.215686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.216188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.216782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.216887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.338 qpair failed and we were unable to recover it. 00:31:35.338 [2024-06-08 21:27:13.217609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.218247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.338 [2024-06-08 21:27:13.218287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.218821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.219313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.219342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.219730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.220228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.220257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.220766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.221255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.221286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 
00:31:35.339 [2024-06-08 21:27:13.221567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.222081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.222111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.222386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.222954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.222986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.223475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.223726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.223756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.224225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.224473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.224503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.225000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.225534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.225564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.226065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.226432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.226464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.226905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.227369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.227400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 
00:31:35.339 [2024-06-08 21:27:13.227644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.228123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.228154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.228660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.229151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.229180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.229695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.230186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.230214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.230808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.231477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.231544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.231904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.232436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.232491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.233006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.233613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.233716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.234321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.234848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.234880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 
00:31:35.339 [2024-06-08 21:27:13.235377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.235888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.235939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.236447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.236979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.237009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.237462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.237716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.237744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.238148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.238670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.238700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.239255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.239631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.239661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.240155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.240655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.240686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 00:31:35.339 [2024-06-08 21:27:13.241188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.241680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.241712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.339 qpair failed and we were unable to recover it. 
00:31:35.339 [2024-06-08 21:27:13.242222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.242795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.339 [2024-06-08 21:27:13.242900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.243611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.244043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.244084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.244471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.245010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.245040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.245546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.246041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.246082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.246593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.247084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.247112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.247434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.247923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.247951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.248463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.248846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.248875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 
00:31:35.340 [2024-06-08 21:27:13.249361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.249826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.249857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.250199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.250694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.250723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.251064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.251332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.251361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.251858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.252351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.252380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.252903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.253169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.253197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.253693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.253961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.253988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.254472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.254839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.254875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 
00:31:35.340 [2024-06-08 21:27:13.255371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.255742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.255773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.256298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.256786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.256816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.257325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.257580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.257609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.258133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.258628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.258659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.259158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.259652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.259682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.259944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.260467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.260497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.260774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.261172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.261200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 
00:31:35.340 [2024-06-08 21:27:13.261678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.262169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.262197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.262703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.263067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.263095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.263620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.264109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.264143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.264643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.265133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.265161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.340 qpair failed and we were unable to recover it. 00:31:35.340 [2024-06-08 21:27:13.265764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.266368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.340 [2024-06-08 21:27:13.266427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.266926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.267076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.267103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.267359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.267654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.267684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 
00:31:35.341 [2024-06-08 21:27:13.267874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.268246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.268278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.268537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.268982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.269012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.269516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.270005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.270034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.270548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.271040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.271069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.271441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.271967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.271996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.272484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.273000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.273029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.273533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.274026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.274056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 
00:31:35.341 [2024-06-08 21:27:13.274565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.275075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.275105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.275535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.276017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.276046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.276539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.277025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.277053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.277559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.277953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.277982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.278468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.278861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.278889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.279392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.279966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.279995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.280494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.280645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.280677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 
00:31:35.341 [2024-06-08 21:27:13.281057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.281558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.281588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.282096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.282591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.282620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.283154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.283673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.283702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.341 [2024-06-08 21:27:13.284195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.284798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.341 [2024-06-08 21:27:13.284903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.341 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.285512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.286019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.286049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.286553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.287047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.287077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.287586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.288083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.288113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 
00:31:35.342 [2024-06-08 21:27:13.288556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.288955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.288985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.289480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.289996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.290026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.290531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.291057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.291087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.291599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.292085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.292113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.292457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.292738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.292767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.293288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.293539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.293567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.293945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.294259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.294289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 
00:31:35.342 [2024-06-08 21:27:13.294778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.295271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.295299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.295798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.296304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.296332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.296840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.297237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.297266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.297763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.298264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.298294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.298464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.298987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.299016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.299506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.299961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.299989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.300364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.300835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.300865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 
00:31:35.342 [2024-06-08 21:27:13.301374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.301836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.301866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.302363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.302638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.302667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.302963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.303490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.303519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.303863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.304355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.304384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.304909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.305436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.305465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.305984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.306496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.306525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.342 [2024-06-08 21:27:13.307033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.307524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.307554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 
00:31:35.342 [2024-06-08 21:27:13.307962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.308265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.342 [2024-06-08 21:27:13.308296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.342 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.308779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.309192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.309221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.309756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.310009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.310036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.310521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.310964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.310993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.311501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.312014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.312043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.312561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.312828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.312856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.313355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.313818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.313847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 
00:31:35.343 [2024-06-08 21:27:13.314348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.314810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.314840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.315345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.315803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.315833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.316351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.316818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.316849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.317360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.317733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.317763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.317919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.318428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.318457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.318692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.319200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.319229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.319637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.320171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.320200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 
00:31:35.343 [2024-06-08 21:27:13.320683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.321173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.321203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.321697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.322197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.322226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.322840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.323441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.323484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.324038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.324625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.324730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.325342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.325929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.326035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.326691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.326939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.326977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.327497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.327873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.327904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 
00:31:35.343 [2024-06-08 21:27:13.328317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.328817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.328850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.329245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.329490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.329520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.330008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.330540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.330571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.330950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.331495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.331528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.343 [2024-06-08 21:27:13.332017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.332553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.343 [2024-06-08 21:27:13.332583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.343 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.333097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.333602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.333633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.334131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.334617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.334647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 
00:31:35.344 [2024-06-08 21:27:13.334929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.335432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.335464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.335974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.336475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.336527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.337048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.337542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.337572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.338075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.338566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.338595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.339098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.339577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.339608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.339977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.340345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.340379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.340781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.341307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.341339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 
00:31:35.344 [2024-06-08 21:27:13.341825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.342072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.342100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.342602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.342849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.342877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.343371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.343913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.343943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.344454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.344829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.344859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.345246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.345635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.345664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.346159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.346654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.346683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.347195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.347686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.347717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 
00:31:35.344 [2024-06-08 21:27:13.348221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.348812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.348919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.349659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.350218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.350257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.350772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.351232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.351263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.351767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.352260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.352289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.352767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.353013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.353040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.353312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.353794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.353826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.354330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.354595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.354624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 
00:31:35.344 [2024-06-08 21:27:13.355126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.355454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.355483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.344 [2024-06-08 21:27:13.355997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.356367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.344 [2024-06-08 21:27:13.356397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.344 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.356914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.357416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.357446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.357813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.358307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.358336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.358842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.359344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.359374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.359881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.360373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.360423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.360970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.361648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.361752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 
00:31:35.345 [2024-06-08 21:27:13.362362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.362894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.362927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.363429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.363924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.363953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.364482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.365040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.365069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.365688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.366324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.366364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.366674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.367166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.367197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.367774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.368386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.368454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.368995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.369602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.369706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 
00:31:35.345 [2024-06-08 21:27:13.370160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.370356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.370392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.370915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.371477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.371535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.371860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.372206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.372234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.372739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.372890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.372917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.373434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.373930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.373959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.374467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.374963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.374992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.375456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.375974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.376003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 
00:31:35.345 [2024-06-08 21:27:13.376518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.376769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.376796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.377172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.377710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.377741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.378239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.378715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.378745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.379254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.379523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.379551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.380037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.380572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.380605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.345 [2024-06-08 21:27:13.381104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.381596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.345 [2024-06-08 21:27:13.381627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.345 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.382131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.382629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.382658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 
00:31:35.346 [2024-06-08 21:27:13.382948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.383472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.383502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.383876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.384185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.384215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.384565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.385097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.385127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.385676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.386171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.386200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.386548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.387049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.387078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.387581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.388011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.388040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.388523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.389018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.389047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 
00:31:35.346 [2024-06-08 21:27:13.389328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.389806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.389845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.390350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.390843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.390875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.391379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.391910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.391941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.392448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.392968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.392999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.393508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.393779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.393806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.394304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.394791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.394822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.395319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.395815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.395845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 
00:31:35.346 [2024-06-08 21:27:13.396340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.396830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.396861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.397367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.397754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.397784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.398273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.398764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.398795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.399252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.399623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.399660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.400150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.400398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.400450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.400948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.401472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.401525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.346 qpair failed and we were unable to recover it. 00:31:35.346 [2024-06-08 21:27:13.402005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.346 [2024-06-08 21:27:13.402469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.402499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 
00:31:35.347 [2024-06-08 21:27:13.402784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.403309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.403337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.403822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.404338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.404366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.404636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.405119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.405148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.405543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.406019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.406047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.406528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.407019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.407048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.407612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.408108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.408138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.408645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.409133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.409172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 
00:31:35.347 [2024-06-08 21:27:13.409542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.409962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.409991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.410511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.410994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.411023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.411364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.411735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.411764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.412271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.412547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.412578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.413081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.413571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.413602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.414101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.414598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.414627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.414889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.415394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.415433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 
00:31:35.347 [2024-06-08 21:27:13.415719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.416197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.416225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.416780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.417287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.417316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.417580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.417861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.417898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.418435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.418843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.418885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.419380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.419883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.419912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.420433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.420831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.420861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.421362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.421833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.421865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 
00:31:35.347 [2024-06-08 21:27:13.422356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.422825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.422856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.347 qpair failed and we were unable to recover it. 00:31:35.347 [2024-06-08 21:27:13.423364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.347 [2024-06-08 21:27:13.423523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.423552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.424064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.424432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.424462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.424997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.425481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.425510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.425863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.426351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.426380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.426878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.427370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.427398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.427916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.428164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.428192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 
00:31:35.613 [2024-06-08 21:27:13.428447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.428851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.428880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.429416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.429903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.429933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.430468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.431021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.431051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.431553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.432046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.432075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.432583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.433112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.433140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.433623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.434117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.434145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.434751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.435385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.435443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 
00:31:35.613 [2024-06-08 21:27:13.435905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.436439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.436472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.436854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.437348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.437377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.437681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.437956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.437985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.438500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.439002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.439031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.439468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.439739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.439768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.613 qpair failed and we were unable to recover it. 00:31:35.613 [2024-06-08 21:27:13.440258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.613 [2024-06-08 21:27:13.440510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.440561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.440942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.441441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.441473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 
00:31:35.614 [2024-06-08 21:27:13.441980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.442471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.442501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.442986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.443478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.443509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.443985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.444482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.444513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.444902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.445371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.445412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.445909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.446411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.446442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.446976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.447615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.447721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.448277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.448734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.448766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 
00:31:35.614 [2024-06-08 21:27:13.449256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.449766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.449797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.450357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.450888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.450918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.451180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.451678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.451708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.451998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.452485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.452516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.452786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.453184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.453213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.453703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.454193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.454221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.454738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.455107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.455136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 
00:31:35.614 [2024-06-08 21:27:13.455503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.456015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.456043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.456538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.457030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.457060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.457563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.458056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.458086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.458596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.459087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.459117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.459609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.459879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.459910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.460397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.460741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.460772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.461266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.461513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.461541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 
00:31:35.614 [2024-06-08 21:27:13.461923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.462425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.462457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.462966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.463326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.463356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.614 qpair failed and we were unable to recover it. 00:31:35.614 [2024-06-08 21:27:13.463831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.614 [2024-06-08 21:27:13.464328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.464358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.464806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.465262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.465292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.465766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.466125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.466154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.466658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.466926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.466956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.467435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.467905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.467934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 
00:31:35.615 [2024-06-08 21:27:13.468449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.468806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.468835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.469309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.469806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.469838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.470337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.470803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.470833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.471322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.471813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.471844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.472344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.472808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.472838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.473297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.473787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.473817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.474314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.474809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.474839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 
00:31:35.615 [2024-06-08 21:27:13.475108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.475378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.475418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.475891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.476387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.476433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.476866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.477245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.477281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.477655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.478191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.478223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.478491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.478792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.478821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.479085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.479570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.479600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.480112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.480360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.480390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 
00:31:35.615 [2024-06-08 21:27:13.480909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.481397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.481442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.481946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.482469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.482524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.483048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.483425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.483459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.483857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.484094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.484124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.484640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.485028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.485070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.485590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.486095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.486125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.615 qpair failed and we were unable to recover it. 00:31:35.615 [2024-06-08 21:27:13.486441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.615 [2024-06-08 21:27:13.486707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.486735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 
00:31:35.616 [2024-06-08 21:27:13.487216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.487777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.487881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.488483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.488988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.489017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.489515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.489977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.490006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.490487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.490999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.491026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.491510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.492054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.492083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.492452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.492972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.492998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.493471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.493852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.493880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 
00:31:35.616 [2024-06-08 21:27:13.494364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.494678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.494707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.495211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.495466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.495495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.495876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.496274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.496301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.496650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.497120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.497146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.497633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.498102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.498128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.498632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.499124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.499151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.499545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.500042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.500069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 
00:31:35.616 [2024-06-08 21:27:13.500572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.501033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.501060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.501537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.502014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.502040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.502578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.502849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.502875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.503417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.503921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.503948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.504229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.504712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.504739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.505227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.505792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.505895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.506660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.507260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.507298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 
00:31:35.616 [2024-06-08 21:27:13.507780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.508346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.508373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.508892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.509361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.509387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.616 [2024-06-08 21:27:13.509888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.510164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.616 [2024-06-08 21:27:13.510190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.616 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.510849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.511448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.511490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.511897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.512375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.512431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.512799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.513322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.513350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.513915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.514399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.514442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 
00:31:35.617 [2024-06-08 21:27:13.514958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.515493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.515521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.515943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.516467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.516517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.516954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.517221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.517247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.517746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.518258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.518285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.518557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.519040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.519068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.519571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.519944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.519982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.520355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.520875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.520905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 
00:31:35.617 [2024-06-08 21:27:13.521165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.521464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.521491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.521966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.522465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.522494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.523007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.523469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.523498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.523984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.524350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.524376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.524829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.525075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.525102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.525475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.525845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.525872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.526213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.526705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.526733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 
00:31:35.617 [2024-06-08 21:27:13.527233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.527721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.527749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.528187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.528768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.528869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.529459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.530039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.530068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.530456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.530748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.530775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.531189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.531678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.531708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.532196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.532657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.532685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.617 qpair failed and we were unable to recover it. 00:31:35.617 [2024-06-08 21:27:13.533167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.617 [2024-06-08 21:27:13.533724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.533826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 
00:31:35.618 [2024-06-08 21:27:13.534382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.534896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.534926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.535444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.535793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.535822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.535978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.536393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.536445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.536968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.537332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.537359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.537749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.538218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.538246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.538767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.539231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.539258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.539697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.540183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.540209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 
00:31:35.618 [2024-06-08 21:27:13.540688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.541112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.541151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.541614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.542188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.542215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.542811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.543453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.543494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.544020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.544489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.544518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.545019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.545486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.545514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.545992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.546459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.546487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.546990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.547523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.547551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 
00:31:35.618 [2024-06-08 21:27:13.548063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.548554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.548582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.548875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.549173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.549200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.549724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.550187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.550214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.550592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.550850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.550893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.551209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.551704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.618 [2024-06-08 21:27:13.551733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.618 qpair failed and we were unable to recover it. 00:31:35.618 [2024-06-08 21:27:13.552273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.552624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.552652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.553091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.553384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.553439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 
00:31:35.619 [2024-06-08 21:27:13.553926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.554176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.554204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.554598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.554855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.554892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.555153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.555531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.555559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.555959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.556439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.556467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.556967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.557436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.557464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.557950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.558433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.558462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.558776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.559140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.559176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 
00:31:35.619 [2024-06-08 21:27:13.559723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.560218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.560245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.560728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.561190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.561217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.561697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.562164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.562192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.562717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.563210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.563237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.563852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.564478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.564543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.564867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.565232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.565259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.565708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.566105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.566132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 
00:31:35.619 [2024-06-08 21:27:13.566512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.567004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.567031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.567286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.567689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.567718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.568229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.568699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.568727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.569227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.569733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.569762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.570272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.570609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.570636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.571108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.571649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.571678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.572085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.572484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.572528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 
00:31:35.619 [2024-06-08 21:27:13.573048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.573516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.573543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.574048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.574511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.619 [2024-06-08 21:27:13.574538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.619 qpair failed and we were unable to recover it. 00:31:35.619 [2024-06-08 21:27:13.575044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.575509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.575538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.576044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.576516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.576544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.577054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.577527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.577554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.578111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.578578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.578606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.579099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.579568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.579597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 
00:31:35.620 [2024-06-08 21:27:13.580095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.580470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.580498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.580655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.580999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.581025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.581537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.581949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.581975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.582529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.583009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.583038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.583297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.583786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.583814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.584312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.584801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.584829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.585089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.585234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.585261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 
00:31:35.620 [2024-06-08 21:27:13.585553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.586071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.586098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.586604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.587070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.587097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.587394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.587890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.587917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.588094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.588462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.588491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.588807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.589149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.589176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.589674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.590144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.590170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.590573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.591060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.591090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 
00:31:35.620 [2024-06-08 21:27:13.591358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.591645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.591673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.592151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.592510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.592539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.592884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.593260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.593288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.593769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.594263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.594289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.594812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.595276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.620 [2024-06-08 21:27:13.595303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.620 qpair failed and we were unable to recover it. 00:31:35.620 [2024-06-08 21:27:13.595830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.596297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.596325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.596609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.596912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.596939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 
00:31:35.621 [2024-06-08 21:27:13.597435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.597903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.597931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.598456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.598955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.598982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.599458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.599923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.599951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.600503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.600976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.601003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.601501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.601966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.601993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.602375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.602728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.602757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.603104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.603494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.603521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 
00:31:35.621 [2024-06-08 21:27:13.604108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.604573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.604601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.605020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.605508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.605536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.605928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.606425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.606453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.606919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.607387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.607427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.607939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.608315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.608341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.608702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.609220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.609248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.609729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.610242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.610268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 
00:31:35.621 [2024-06-08 21:27:13.610753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.611181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.611209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.611690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.612145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.612173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.612647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.613143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.613171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.613780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.614383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.614442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.614795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.615264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.615292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.615570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.616075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.616105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.616546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.616814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.616841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 
00:31:35.621 [2024-06-08 21:27:13.617213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.617686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.617716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.618050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.618447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.621 [2024-06-08 21:27:13.618477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.621 qpair failed and we were unable to recover it. 00:31:35.621 [2024-06-08 21:27:13.619076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.619348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.619375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.619850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.620363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.620390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.620711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.621102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.621129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.621636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.622100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.622129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.622436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.622968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.622996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 
00:31:35.622 [2024-06-08 21:27:13.623482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.624035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.624063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.624457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.624958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.624985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.625463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.625955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.625981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.626354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.626804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.626834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.627343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.627802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.627835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.628306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.628786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.628816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.629298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.629773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.629803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 
00:31:35.622 [2024-06-08 21:27:13.630300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.630569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.630597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.631099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.631560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.631588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.632079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.632545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.632572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.633060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.633595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.633624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.634126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.634537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.634564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.635058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.635524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.635553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.635951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.636455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.636484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 
00:31:35.622 [2024-06-08 21:27:13.636963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.637507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.637537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.638037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.638422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.638452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.638949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.639422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.639451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.639931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.640294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.640321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.640797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.641306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.641334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.641894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.642356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.622 [2024-06-08 21:27:13.642382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.622 qpair failed and we were unable to recover it. 00:31:35.622 [2024-06-08 21:27:13.642662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.643119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.643149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 
00:31:35.623 [2024-06-08 21:27:13.643657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.644255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.644293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.644850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.645342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.645369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.645868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.646335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.646363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.646788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.647260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.647287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.647766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.648162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.648190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.648327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.648695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.648723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.649237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.649700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.649729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 
00:31:35.623 [2024-06-08 21:27:13.650212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.650732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.650835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.651439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.651946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.651975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.652475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.652970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.652999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.653491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.653768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.653795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.654321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.654699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.654728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.655286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.655784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.655813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.656295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.656792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.656819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 
00:31:35.623 [2024-06-08 21:27:13.657321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.657790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.657817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.658244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.658831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.658933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.659434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.659773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.659801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.660085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.660347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.660376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.660861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.661235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.661261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.661640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.662136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.662164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.623 qpair failed and we were unable to recover it. 00:31:35.623 [2024-06-08 21:27:13.662517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.623 [2024-06-08 21:27:13.662693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.662721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 
00:31:35.624 [2024-06-08 21:27:13.663181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.663571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.663612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.663890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.664304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.664331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.664834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.665106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.665134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.665647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.666122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.666150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.666649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.667121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.667149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 21:27:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:35.624 [2024-06-08 21:27:13.667641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 21:27:13 -- common/autotest_common.sh@852 -- # return 0 00:31:35.624 21:27:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:35.624 [2024-06-08 21:27:13.668112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.668139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 
00:31:35.624 21:27:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:35.624 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.624 [2024-06-08 21:27:13.668646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.669147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.669176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.669684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.670338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.670389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.670937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.671470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.671526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.672096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.672657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.672760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.673359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.673747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.673779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.674263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.674517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.674545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.674820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.675289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.675315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 
00:31:35.624 [2024-06-08 21:27:13.675847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.676310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.676340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.676850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.677321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.677349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.677859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.678365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.678393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.678838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.679085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.679110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.679610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.679999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.680026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.680553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.681032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.681059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.681477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.681987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.682017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 
00:31:35.624 [2024-06-08 21:27:13.682501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.683019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.683046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.683544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.683925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.683951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.684353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.684859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.684888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.624 qpair failed and we were unable to recover it. 00:31:35.624 [2024-06-08 21:27:13.685341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.624 [2024-06-08 21:27:13.685896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.685924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.686659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.687257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.687297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.687868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.688126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.688153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.688541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.689020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.689049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 
00:31:35.625 [2024-06-08 21:27:13.689209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.689663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.689691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.690211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.690706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.690735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.691017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.691505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.691536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.691806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.692283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.692308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.692589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.693086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.693112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.693490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.693992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.694019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.694627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.695072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.695100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 
00:31:35.625 [2024-06-08 21:27:13.695524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.696014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.696041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.696521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.696986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.697013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.697428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.697939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.697967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.698381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.698874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.698902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.699347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.699602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.699632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.625 [2024-06-08 21:27:13.699890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.700353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.625 [2024-06-08 21:27:13.700381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.625 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.700774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.701277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.701305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 
00:31:35.887 [2024-06-08 21:27:13.701802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.702272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.702299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.702642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.703131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.703158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.703639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.704139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.704167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.704737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.705227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.705257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.705775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.706238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.706265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.706762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.707229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.707257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.887 qpair failed and we were unable to recover it. 00:31:35.887 [2024-06-08 21:27:13.707514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.887 [2024-06-08 21:27:13.707901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.707928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 
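The long run of connect() failed, errno = 111 messages above comes from SPDK's host-side POSIX socket layer: on Linux errno 111 is ECONNREFUSED, meaning nothing was accepting connections on 10.0.0.2:4420 yet, so every qpair connect attempt is refused and retried. A minimal manual check, under the assumption that the ss and nc utilities are available on the test node (they are not invoked anywhere in this log):

# Hypothetical check, not part of the test run: list TCP listeners on the target;
# 10.0.0.2:4420 only shows up once the nvmf listener has been added further below.
ss -ltn | grep 4420 || echo "no listener on port 4420 yet"
# Probe the port from the host side; a refused connect here corresponds to the
# errno 111 (ECONNREFUSED) messages in the log.
nc -z -w1 10.0.0.2 4420 && echo "port 4420 reachable" || echo "connection refused or timed out"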
00:31:35.888 [2024-06-08 21:27:13.708434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.708864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.708891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 21:27:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.888 [2024-06-08 21:27:13.709377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 21:27:13 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:35.888 21:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.888 [2024-06-08 21:27:13.709858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.709886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.888 [2024-06-08 21:27:13.710411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.710905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.710933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.711300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.711759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.711789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.712273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.712752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.712780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.713280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.713779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.713806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 
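Interleaved with the socket errors, the test script starts provisioning the target. The first step, rpc_cmd bdev_malloc_create 64 512 -b Malloc0, creates a RAM-backed bdev; rpc_cmd appears to be the suite's wrapper around SPDK's scripts/rpc.py, so under that assumption the equivalent direct call is roughly:

# Sketch only: create a 64 MB malloc (RAM-backed) bdev with 512-byte blocks
# named Malloc0, mirroring the rpc_cmd call in the log above.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0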
00:31:35.888 [2024-06-08 21:27:13.714288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.714610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.714646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.714905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.715142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.715168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.715690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.716063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.716090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.716620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.716973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.717004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.717507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.718027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.718055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.718456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.718965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.718994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.719429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.719805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.719831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 
00:31:35.888 [2024-06-08 21:27:13.720323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.720790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.720820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.721303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.721788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.721816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.722296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.722773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.722801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.723274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.723545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.723574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.724048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.724587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.724614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.725130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.725599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.725627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.726007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.726518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.726546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 
00:31:35.888 [2024-06-08 21:27:13.727017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.727389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.727431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.727956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.728372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.728397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.728989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.729644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.729748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.888 qpair failed and we were unable to recover it. 00:31:35.888 [2024-06-08 21:27:13.730259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.888 [2024-06-08 21:27:13.730812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.730843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.731329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.731800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.731829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.732315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 Malloc0 00:31:35.889 [2024-06-08 21:27:13.732811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.732839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 21:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.889 [2024-06-08 21:27:13.733349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.733539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.733572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 
00:31:35.889 21:27:13 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:35.889 [2024-06-08 21:27:13.733966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 21:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.889 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.889 [2024-06-08 21:27:13.734334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.734372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.734885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.735353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.735392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.735791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.736303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.736331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.736840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.737246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.737274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.737792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.738277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.738304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.738573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.738973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.739000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 
00:31:35.889 [2024-06-08 21:27:13.739498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.739921] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.889 [2024-06-08 21:27:13.739977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.740004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.740400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.740796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.740827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.741208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.741673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.741702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.742090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.742588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.742617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.743087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.743561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.743589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.744153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.744699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.744737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.745283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.745747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.745775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 
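The next step, rpc_cmd nvmf_create_transport -t tcp -o, creates the NVMe-oF TCP transport inside the target; the *** TCP Transport Init *** notice from tcp.c above is the target acknowledging it. Assuming again that rpc_cmd forwards to scripts/rpc.py:

# Sketch only: create the TCP transport, mirroring the log. "-t tcp" selects the
# transport type; "-o" is copied verbatim from the log (a TCP-specific option of
# the test suite's choosing; its exact meaning is not verified here).
./scripts/rpc.py nvmf_create_transport -t tcp -o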
00:31:35.889 [2024-06-08 21:27:13.746152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.746619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.746646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.747146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.747711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.747813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.748286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.748785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.748816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 21:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.889 [2024-06-08 21:27:13.749315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 21:27:13 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.889 21:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.889 [2024-06-08 21:27:13.749794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.749824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.889 [2024-06-08 21:27:13.750323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.750816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.750846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 00:31:35.889 [2024-06-08 21:27:13.751340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.751861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.751890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.889 qpair failed and we were unable to recover it. 
00:31:35.889 [2024-06-08 21:27:13.752210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.889 [2024-06-08 21:27:13.752587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.752627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.753132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.753698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.753799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.754394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.754910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.754940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.755241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.755836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.755939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.756333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.756897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.757002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.757691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.758280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.758319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.758844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.759218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.759257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 
00:31:35.890 [2024-06-08 21:27:13.759827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.760298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.760325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.760854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 21:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.890 [2024-06-08 21:27:13.761353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.761381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 21:27:13 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.761675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 21:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.890 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.890 [2024-06-08 21:27:13.762163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.762190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.762712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.763178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.763206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.763813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.764436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.764475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.764989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.765652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.765755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.766346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.766905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.766937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 
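The script then creates the subsystem and attaches the malloc bdev to it as a namespace (the rpc_cmd nvmf_create_subsystem and rpc_cmd nvmf_subsystem_add_ns calls above). Under the same scripts/rpc.py assumption, the equivalent calls are:

# Sketch only: subsystem creation and namespace attach, as in the log above.
# "-a" allows any host to connect; "-s" sets the subsystem serial number.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Expose Malloc0 as a namespace of that subsystem.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0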
00:31:35.890 [2024-06-08 21:27:13.767306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.767905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.768008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.768682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.769273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.769310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.769556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.770054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.770082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.770568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.770942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.770969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.771233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.771719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.771748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.772246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.772812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.772917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 21:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.890 [2024-06-08 21:27:13.773507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 21:27:13 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.890 21:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.890 [2024-06-08 21:27:13.774018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.774060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 
00:31:35.890 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.890 [2024-06-08 21:27:13.774537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.774701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.774727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.775266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.775761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.775789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.890 qpair failed and we were unable to recover it. 00:31:35.890 [2024-06-08 21:27:13.776294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.890 [2024-06-08 21:27:13.776666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.776693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.777195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.777665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.777693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.778169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.778657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.778686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.779198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.779788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 [2024-06-08 21:27:13.779891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f050c000b90 with addr=10.0.0.2, port=4420 00:31:35.891 qpair failed and we were unable to recover it. 
00:31:35.891 [2024-06-08 21:27:13.780350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.891 [2024-06-08 21:27:13.780654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.891 21:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.891 21:27:13 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:35.891 21:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.891 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:31:35.891 [2024-06-08 21:27:13.787760] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:31:35.891 [2024-06-08 21:27:13.787878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f050c000b90 (107): Transport endpoint is not connected 00:31:35.891 [2024-06-08 21:27:13.787988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.790930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.791221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.791302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.791342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.791363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.791442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 21:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.891 21:27:13 -- host/target_disconnect.sh@58 -- # wait 2583479 00:31:35.891 [2024-06-08 21:27:13.800746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.800912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.800954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.800972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.800986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.801022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 
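Finally the script adds a TCP listener for the subsystem and for the discovery service; the *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** notice marks the point after which connections stop being refused, and the errno 111 spam gives way to Fabrics-level CONNECT failures below. Equivalent calls under the same rpc.py assumption, plus a purely illustrative host-side check that is not what this test actually runs:

# Sketch only: listeners as added in the log above.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Hypothetical manual connect from a host with nvme-cli installed (the test
# itself drives SPDK's own TCP initiator instead):
# nvme discover -t tcp -a 10.0.0.2 -s 4420
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1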
00:31:35.891 [2024-06-08 21:27:13.810799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.810946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.810981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.810993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.811002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.811030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.820665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.820793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.820823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.820833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.820840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.820862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.830768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.830880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.830909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.830919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.830926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.830953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 
00:31:35.891 [2024-06-08 21:27:13.840795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.840912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.840942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.840952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.840958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.840980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.850731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.850840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.850870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.850879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.850885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.850907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 00:31:35.891 [2024-06-08 21:27:13.860706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.860836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.860866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.860876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.860883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.891 [2024-06-08 21:27:13.860905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.891 qpair failed and we were unable to recover it. 
00:31:35.891 [2024-06-08 21:27:13.870806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.891 [2024-06-08 21:27:13.870926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.891 [2024-06-08 21:27:13.870968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.891 [2024-06-08 21:27:13.870979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.891 [2024-06-08 21:27:13.870986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.871014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.880877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.880996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.881034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.881044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.881050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.881073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.890845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.890958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.890988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.890999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.891005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.891027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 
00:31:35.892 [2024-06-08 21:27:13.900915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.901035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.901064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.901073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.901079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.901101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.910904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.911007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.911035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.911045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.911052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.911072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.920840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.920967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.920996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.921005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.921017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.921039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 
00:31:35.892 [2024-06-08 21:27:13.930915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.931035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.931064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.931073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.931080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.931101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.940917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.941028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.941058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.941067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.941074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.941094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.951025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.951141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.951170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.951179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.951186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.951208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 
00:31:35.892 [2024-06-08 21:27:13.961054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.961169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.961198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.961207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.892 [2024-06-08 21:27:13.961213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.892 [2024-06-08 21:27:13.961234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.892 qpair failed and we were unable to recover it. 00:31:35.892 [2024-06-08 21:27:13.971063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.892 [2024-06-08 21:27:13.971178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.892 [2024-06-08 21:27:13.971208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.892 [2024-06-08 21:27:13.971217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.893 [2024-06-08 21:27:13.971223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:35.893 [2024-06-08 21:27:13.971244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:35.893 qpair failed and we were unable to recover it. 00:31:36.154 [2024-06-08 21:27:13.981121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.154 [2024-06-08 21:27:13.981231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.154 [2024-06-08 21:27:13.981261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.154 [2024-06-08 21:27:13.981270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.154 [2024-06-08 21:27:13.981277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.154 [2024-06-08 21:27:13.981298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.154 qpair failed and we were unable to recover it. 
00:31:36.154 [2024-06-08 21:27:13.991126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.154 [2024-06-08 21:27:13.991233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.154 [2024-06-08 21:27:13.991262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.154 [2024-06-08 21:27:13.991272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.154 [2024-06-08 21:27:13.991279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.154 [2024-06-08 21:27:13.991300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.154 qpair failed and we were unable to recover it. 00:31:36.154 [2024-06-08 21:27:14.001209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.154 [2024-06-08 21:27:14.001357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.154 [2024-06-08 21:27:14.001385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.154 [2024-06-08 21:27:14.001393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.154 [2024-06-08 21:27:14.001400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.154 [2024-06-08 21:27:14.001430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.154 qpair failed and we were unable to recover it. 00:31:36.154 [2024-06-08 21:27:14.011224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.154 [2024-06-08 21:27:14.011334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.154 [2024-06-08 21:27:14.011363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.154 [2024-06-08 21:27:14.011373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.154 [2024-06-08 21:27:14.011386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.154 [2024-06-08 21:27:14.011419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.154 qpair failed and we were unable to recover it. 
00:31:36.154 [2024-06-08 21:27:14.021246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.154 [2024-06-08 21:27:14.021375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.154 [2024-06-08 21:27:14.021412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.154 [2024-06-08 21:27:14.021424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.154 [2024-06-08 21:27:14.021431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.154 [2024-06-08 21:27:14.021452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.154 qpair failed and we were unable to recover it. 00:31:36.154 [2024-06-08 21:27:14.031306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.154 [2024-06-08 21:27:14.031428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.154 [2024-06-08 21:27:14.031458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.154 [2024-06-08 21:27:14.031467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.031474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.031495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.041311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.041544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.041573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.041583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.041590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.041612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 
00:31:36.155 [2024-06-08 21:27:14.051317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.051455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.051486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.051501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.051508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.051531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.061496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.061630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.061660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.061671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.061678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.061701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.071500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.071616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.071645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.071656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.071662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.071684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 
00:31:36.155 [2024-06-08 21:27:14.081539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.081657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.081686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.081697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.081704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.081725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.091520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.091633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.091661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.091671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.091678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.091699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.101510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.101630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.101659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.101675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.101682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.101706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 
00:31:36.155 [2024-06-08 21:27:14.111556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.111661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.111690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.111699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.111706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.111727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.121560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.121668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.121697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.121706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.121713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.121734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.131507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.131614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.131642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.131651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.131659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.131680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 
00:31:36.155 [2024-06-08 21:27:14.141637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.155 [2024-06-08 21:27:14.141756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.155 [2024-06-08 21:27:14.141784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.155 [2024-06-08 21:27:14.141793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.155 [2024-06-08 21:27:14.141800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.155 [2024-06-08 21:27:14.141820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.155 qpair failed and we were unable to recover it. 00:31:36.155 [2024-06-08 21:27:14.151638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.151747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.151775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.151784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.151791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.151812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.156 [2024-06-08 21:27:14.161754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.161879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.161909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.161919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.161925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.161947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 
00:31:36.156 [2024-06-08 21:27:14.171708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.171815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.171845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.171854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.171861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.171881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.156 [2024-06-08 21:27:14.181765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.181881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.181912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.181921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.181928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.181950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.156 [2024-06-08 21:27:14.191659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.191795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.191825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.191841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.191847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.191871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 
00:31:36.156 [2024-06-08 21:27:14.201712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.201821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.201851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.201861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.201867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.201888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.156 [2024-06-08 21:27:14.211840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.211951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.211981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.211990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.211997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.212019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.156 [2024-06-08 21:27:14.221852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.221977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.222018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.222030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.222037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.222064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 
00:31:36.156 [2024-06-08 21:27:14.231760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.231872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.231913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.231924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.231931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.231958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.156 [2024-06-08 21:27:14.241921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.156 [2024-06-08 21:27:14.242036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.156 [2024-06-08 21:27:14.242068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.156 [2024-06-08 21:27:14.242079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.156 [2024-06-08 21:27:14.242085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.156 [2024-06-08 21:27:14.242110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.156 qpair failed and we were unable to recover it. 00:31:36.418 [2024-06-08 21:27:14.251976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.418 [2024-06-08 21:27:14.252093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.418 [2024-06-08 21:27:14.252135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.418 [2024-06-08 21:27:14.252147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.418 [2024-06-08 21:27:14.252155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.418 [2024-06-08 21:27:14.252182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.418 qpair failed and we were unable to recover it. 
00:31:36.418 [2024-06-08 21:27:14.261989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.418 [2024-06-08 21:27:14.262101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.418 [2024-06-08 21:27:14.262132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.418 [2024-06-08 21:27:14.262142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.418 [2024-06-08 21:27:14.262149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.418 [2024-06-08 21:27:14.262171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.418 qpair failed and we were unable to recover it. 00:31:36.418 [2024-06-08 21:27:14.272066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.418 [2024-06-08 21:27:14.272212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.418 [2024-06-08 21:27:14.272253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.418 [2024-06-08 21:27:14.272265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.418 [2024-06-08 21:27:14.272272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.418 [2024-06-08 21:27:14.272300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.418 qpair failed and we were unable to recover it. 00:31:36.418 [2024-06-08 21:27:14.282032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.418 [2024-06-08 21:27:14.282191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.418 [2024-06-08 21:27:14.282229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.418 [2024-06-08 21:27:14.282239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.418 [2024-06-08 21:27:14.282246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.418 [2024-06-08 21:27:14.282268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.418 qpair failed and we were unable to recover it. 
00:31:36.418 [2024-06-08 21:27:14.292068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.418 [2024-06-08 21:27:14.292180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.418 [2024-06-08 21:27:14.292210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.418 [2024-06-08 21:27:14.292222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.418 [2024-06-08 21:27:14.292228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.418 [2024-06-08 21:27:14.292250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.418 qpair failed and we were unable to recover it. 00:31:36.418 [2024-06-08 21:27:14.302093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.418 [2024-06-08 21:27:14.302204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.418 [2024-06-08 21:27:14.302234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.418 [2024-06-08 21:27:14.302243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.418 [2024-06-08 21:27:14.302249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.418 [2024-06-08 21:27:14.302272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.418 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.312117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.312245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.312274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.312283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.312289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.312312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 
00:31:36.419 [2024-06-08 21:27:14.322193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.322310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.322339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.322348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.322355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.322384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.332219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.332350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.332378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.332387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.332393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.332423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.342221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.342350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.342380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.342389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.342396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.342430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 
00:31:36.419 [2024-06-08 21:27:14.352245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.352360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.352388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.352397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.352414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.352435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.362317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.362437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.362466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.362476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.362482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.362504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.372326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.372435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.372471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.372481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.372488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.372508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 
00:31:36.419 [2024-06-08 21:27:14.382367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.382501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.382532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.382541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.382548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.382569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.392379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.392510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.392539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.392549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.392555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.392576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.402425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.402539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.402567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.402577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.402583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.402605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 
00:31:36.419 [2024-06-08 21:27:14.412447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.412559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.412587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.412597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.419 [2024-06-08 21:27:14.412611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.419 [2024-06-08 21:27:14.412632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.419 qpair failed and we were unable to recover it. 00:31:36.419 [2024-06-08 21:27:14.422491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.419 [2024-06-08 21:27:14.422604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.419 [2024-06-08 21:27:14.422633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.419 [2024-06-08 21:27:14.422643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.422649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.422670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 00:31:36.420 [2024-06-08 21:27:14.432529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.432670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.432698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.432707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.432714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.432735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 
00:31:36.420 [2024-06-08 21:27:14.442678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.442790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.442820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.442829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.442835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.442857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 00:31:36.420 [2024-06-08 21:27:14.452565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.452670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.452698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.452708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.452714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.452735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 00:31:36.420 [2024-06-08 21:27:14.462543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.462673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.462702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.462712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.462719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.462741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 
00:31:36.420 [2024-06-08 21:27:14.472524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.472639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.472669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.472678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.472684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.472706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 00:31:36.420 [2024-06-08 21:27:14.482575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.482676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.482704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.482713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.482720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.482742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 00:31:36.420 [2024-06-08 21:27:14.492705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.492812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.492840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.492849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.492856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.492877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 
00:31:36.420 [2024-06-08 21:27:14.502775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.420 [2024-06-08 21:27:14.502893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.420 [2024-06-08 21:27:14.502921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.420 [2024-06-08 21:27:14.502932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.420 [2024-06-08 21:27:14.502945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.420 [2024-06-08 21:27:14.502967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.420 qpair failed and we were unable to recover it. 00:31:36.683 [2024-06-08 21:27:14.512835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.512975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.513016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.513027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.513034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.513061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 00:31:36.683 [2024-06-08 21:27:14.522838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.522952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.522992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.523003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.523010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.523037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 
00:31:36.683 [2024-06-08 21:27:14.532826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.532952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.532983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.532992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.532998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.533022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 00:31:36.683 [2024-06-08 21:27:14.542845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.542956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.542985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.542995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.543001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.543023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 00:31:36.683 [2024-06-08 21:27:14.552946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.553058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.553087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.553096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.553103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.553124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 
00:31:36.683 [2024-06-08 21:27:14.562962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.563067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.563095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.563104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.563111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.563132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 00:31:36.683 [2024-06-08 21:27:14.572960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.573068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.573097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.573106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.573112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.573133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 00:31:36.683 [2024-06-08 21:27:14.583004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.583157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.583185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.583194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.683 [2024-06-08 21:27:14.583200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.683 [2024-06-08 21:27:14.583221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.683 qpair failed and we were unable to recover it. 
00:31:36.683 [2024-06-08 21:27:14.593037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.683 [2024-06-08 21:27:14.593158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.683 [2024-06-08 21:27:14.593197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.683 [2024-06-08 21:27:14.593216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.593223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.593251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.603100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.603206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.603236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.603246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.603253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.603275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.613119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.613250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.613279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.613288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.613295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.613316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 
00:31:36.684 [2024-06-08 21:27:14.623114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.623246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.623275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.623284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.623291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.623311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.633159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.633286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.633315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.633324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.633330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.633351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.643177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.643324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.643353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.643362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.643368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.643388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 
00:31:36.684 [2024-06-08 21:27:14.653221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.653324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.653353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.653362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.653368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.653389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.663283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.663388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.663423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.663433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.663440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.663461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.673362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.673482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.673512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.673521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.673527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.673549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 
00:31:36.684 [2024-06-08 21:27:14.683302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.683421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.683450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.683466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.683473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.683494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.693318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.693431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.693460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.693470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.693476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.693497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 00:31:36.684 [2024-06-08 21:27:14.703406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.684 [2024-06-08 21:27:14.703523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.684 [2024-06-08 21:27:14.703552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.684 [2024-06-08 21:27:14.703561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.684 [2024-06-08 21:27:14.703568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.684 [2024-06-08 21:27:14.703590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.684 qpair failed and we were unable to recover it. 
00:31:36.685 [2024-06-08 21:27:14.713385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.685 [2024-06-08 21:27:14.713499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.685 [2024-06-08 21:27:14.713528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.685 [2024-06-08 21:27:14.713538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.685 [2024-06-08 21:27:14.713544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.685 [2024-06-08 21:27:14.713565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.685 qpair failed and we were unable to recover it. 00:31:36.685 [2024-06-08 21:27:14.723400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.685 [2024-06-08 21:27:14.723503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.685 [2024-06-08 21:27:14.723532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.685 [2024-06-08 21:27:14.723541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.685 [2024-06-08 21:27:14.723547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.685 [2024-06-08 21:27:14.723568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.685 qpair failed and we were unable to recover it. 00:31:36.685 [2024-06-08 21:27:14.733395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.685 [2024-06-08 21:27:14.733509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.685 [2024-06-08 21:27:14.733537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.685 [2024-06-08 21:27:14.733547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.685 [2024-06-08 21:27:14.733553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.685 [2024-06-08 21:27:14.733574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.685 qpair failed and we were unable to recover it. 
00:31:36.685 [2024-06-08 21:27:14.743422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.685 [2024-06-08 21:27:14.743531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.685 [2024-06-08 21:27:14.743560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.685 [2024-06-08 21:27:14.743570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.685 [2024-06-08 21:27:14.743577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.685 [2024-06-08 21:27:14.743597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.685 qpair failed and we were unable to recover it. 00:31:36.685 [2024-06-08 21:27:14.753508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.685 [2024-06-08 21:27:14.753639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.685 [2024-06-08 21:27:14.753668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.685 [2024-06-08 21:27:14.753678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.685 [2024-06-08 21:27:14.753684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.685 [2024-06-08 21:27:14.753704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.685 qpair failed and we were unable to recover it. 00:31:36.685 [2024-06-08 21:27:14.763578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.685 [2024-06-08 21:27:14.763714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.685 [2024-06-08 21:27:14.763743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.685 [2024-06-08 21:27:14.763752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.685 [2024-06-08 21:27:14.763759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.685 [2024-06-08 21:27:14.763781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.685 qpair failed and we were unable to recover it. 
00:31:36.947 [2024-06-08 21:27:14.773603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.947 [2024-06-08 21:27:14.773717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.947 [2024-06-08 21:27:14.773753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.947 [2024-06-08 21:27:14.773762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.947 [2024-06-08 21:27:14.773768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.947 [2024-06-08 21:27:14.773789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.947 qpair failed and we were unable to recover it. 00:31:36.947 [2024-06-08 21:27:14.783637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.947 [2024-06-08 21:27:14.783762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.947 [2024-06-08 21:27:14.783792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.947 [2024-06-08 21:27:14.783801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.947 [2024-06-08 21:27:14.783807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.947 [2024-06-08 21:27:14.783827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.947 qpair failed and we were unable to recover it. 00:31:36.947 [2024-06-08 21:27:14.793544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.947 [2024-06-08 21:27:14.793654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.947 [2024-06-08 21:27:14.793686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.947 [2024-06-08 21:27:14.793695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.947 [2024-06-08 21:27:14.793703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.947 [2024-06-08 21:27:14.793725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.947 qpair failed and we were unable to recover it. 
00:31:36.947 [2024-06-08 21:27:14.803663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.947 [2024-06-08 21:27:14.803791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.803823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.803836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.803843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.803865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.813724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.813831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.813861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.813870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.813876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.813911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.823745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.823862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.823891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.823900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.823907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.823927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 
00:31:36.948 [2024-06-08 21:27:14.833656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.833758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.833788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.833798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.833805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.833826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.843694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.843798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.843827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.843836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.843843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.843864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.853864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.853966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.853995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.854004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.854011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.854031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 
00:31:36.948 [2024-06-08 21:27:14.863952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.864084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.864119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.864128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.864134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.864155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.873964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.874117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.874149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.874158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.874166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.874191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.883928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.884052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.884092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.884103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.884110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.884137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 
00:31:36.948 [2024-06-08 21:27:14.893984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.894088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.894120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.894131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.894138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.894162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.904004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.904115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.904144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.904154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.904160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.904190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.948 qpair failed and we were unable to recover it. 00:31:36.948 [2024-06-08 21:27:14.914063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.948 [2024-06-08 21:27:14.914167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.948 [2024-06-08 21:27:14.914196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.948 [2024-06-08 21:27:14.914206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.948 [2024-06-08 21:27:14.914212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.948 [2024-06-08 21:27:14.914234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 
00:31:36.949 [2024-06-08 21:27:14.924096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.924205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.924245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.924256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.924262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.924291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:14.934125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.934238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.934279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.934290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.934297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.934325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:14.944153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.944266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.944297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.944306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.944313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.944335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 
00:31:36.949 [2024-06-08 21:27:14.954199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.954299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.954335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.954345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.954352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.954373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:14.964226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.964334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.964362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.964371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.964378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.964399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:14.974279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.974381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.974417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.974427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.974434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.974456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 
00:31:36.949 [2024-06-08 21:27:14.984314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.984429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.984459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.984468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.984475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.984497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:14.994320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:14.994426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:14.994455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:14.994464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:14.994477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:14.994499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:15.004297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:15.004398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:15.004435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:15.004444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:15.004451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:15.004472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 
00:31:36.949 [2024-06-08 21:27:15.014479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:15.014612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:15.014641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:15.014650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:15.014656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:15.014677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:15.024444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:15.024565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:15.024593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.949 [2024-06-08 21:27:15.024603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.949 [2024-06-08 21:27:15.024609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.949 [2024-06-08 21:27:15.024631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.949 qpair failed and we were unable to recover it. 00:31:36.949 [2024-06-08 21:27:15.034481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.949 [2024-06-08 21:27:15.034579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.949 [2024-06-08 21:27:15.034608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.950 [2024-06-08 21:27:15.034618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.950 [2024-06-08 21:27:15.034624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:36.950 [2024-06-08 21:27:15.034645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.950 qpair failed and we were unable to recover it. 
00:31:37.212 [2024-06-08 21:27:15.044531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.044685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.044714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.044723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.044729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.044751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.054449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.054557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.054587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.054596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.054603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.054625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.064460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.064570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.064598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.064608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.064615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.064636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 
00:31:37.213 [2024-06-08 21:27:15.074625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.074772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.074801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.074810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.074817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.074838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.084668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.084771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.084800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.084809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.084823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.084844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.094623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.094731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.094759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.094769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.094775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.094796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 
00:31:37.213 [2024-06-08 21:27:15.104711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.104831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.104871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.104882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.104889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.104917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.114719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.114839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.114869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.114878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.114885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.114908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.124751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.124864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.124895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.124905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.124911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.124933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 
00:31:37.213 [2024-06-08 21:27:15.134800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.134920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.134961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.134972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.134979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.213 [2024-06-08 21:27:15.135006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.213 qpair failed and we were unable to recover it. 00:31:37.213 [2024-06-08 21:27:15.144814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.213 [2024-06-08 21:27:15.144931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.213 [2024-06-08 21:27:15.144971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.213 [2024-06-08 21:27:15.144982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.213 [2024-06-08 21:27:15.144990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.145018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.154850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.154977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.155018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.155030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.155037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.155064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 
00:31:37.214 [2024-06-08 21:27:15.164900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.165013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.165045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.165055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.165062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.165084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.174898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.175130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.175170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.175187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.175195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.175222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.184972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.185096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.185136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.185147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.185154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.185182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 
00:31:37.214 [2024-06-08 21:27:15.195003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.195126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.195167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.195177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.195184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.195211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.205041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.205162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.205194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.205203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.205210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.205232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.215101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.215218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.215248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.215257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.215264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.215285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 
00:31:37.214 [2024-06-08 21:27:15.225087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.225199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.225230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.225239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.225246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.225268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.235134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.235255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.235285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.235294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.235301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.235321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.245121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.245229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.245259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.245269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.245275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.245296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 
00:31:37.214 [2024-06-08 21:27:15.255198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.255301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.255332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.214 [2024-06-08 21:27:15.255341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.214 [2024-06-08 21:27:15.255348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.214 [2024-06-08 21:27:15.255368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.214 qpair failed and we were unable to recover it. 00:31:37.214 [2024-06-08 21:27:15.265225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.214 [2024-06-08 21:27:15.265346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.214 [2024-06-08 21:27:15.265383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.215 [2024-06-08 21:27:15.265393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.215 [2024-06-08 21:27:15.265399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.215 [2024-06-08 21:27:15.265431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.215 qpair failed and we were unable to recover it. 00:31:37.215 [2024-06-08 21:27:15.275261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.215 [2024-06-08 21:27:15.275393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.215 [2024-06-08 21:27:15.275430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.215 [2024-06-08 21:27:15.275442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.215 [2024-06-08 21:27:15.275449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.215 [2024-06-08 21:27:15.275469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.215 qpair failed and we were unable to recover it. 
00:31:37.215 [2024-06-08 21:27:15.285161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.215 [2024-06-08 21:27:15.285266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.215 [2024-06-08 21:27:15.285294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.215 [2024-06-08 21:27:15.285303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.215 [2024-06-08 21:27:15.285310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.215 [2024-06-08 21:27:15.285331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.215 qpair failed and we were unable to recover it. 00:31:37.215 [2024-06-08 21:27:15.295420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.215 [2024-06-08 21:27:15.295645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.215 [2024-06-08 21:27:15.295673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.215 [2024-06-08 21:27:15.295681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.215 [2024-06-08 21:27:15.295687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.215 [2024-06-08 21:27:15.295706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.215 qpair failed and we were unable to recover it. 00:31:37.481 [2024-06-08 21:27:15.305334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.305459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.305489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.305498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.305505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.305534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.481 qpair failed and we were unable to recover it. 
00:31:37.481 [2024-06-08 21:27:15.315292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.315398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.315435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.315445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.315451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.315471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.481 qpair failed and we were unable to recover it. 00:31:37.481 [2024-06-08 21:27:15.325393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.325505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.325534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.325544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.325550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.325572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.481 qpair failed and we were unable to recover it. 00:31:37.481 [2024-06-08 21:27:15.335474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.335622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.335651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.335659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.335666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.335687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.481 qpair failed and we were unable to recover it. 
00:31:37.481 [2024-06-08 21:27:15.345381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.345539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.345568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.345576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.345583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.345603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.481 qpair failed and we were unable to recover it. 00:31:37.481 [2024-06-08 21:27:15.355332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.355452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.355489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.355498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.355504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.355527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.481 qpair failed and we were unable to recover it. 00:31:37.481 [2024-06-08 21:27:15.365531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.481 [2024-06-08 21:27:15.365680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.481 [2024-06-08 21:27:15.365708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.481 [2024-06-08 21:27:15.365717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.481 [2024-06-08 21:27:15.365724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.481 [2024-06-08 21:27:15.365745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 
00:31:37.482 [2024-06-08 21:27:15.375551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.375655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.375684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.375693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.375700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.375720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 00:31:37.482 [2024-06-08 21:27:15.385457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.385583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.385613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.385622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.385629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.385651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 00:31:37.482 [2024-06-08 21:27:15.395513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.395625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.395654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.395664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.395671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.395701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 
00:31:37.482 [2024-06-08 21:27:15.405568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.405667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.405697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.405706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.405713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.405734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 00:31:37.482 [2024-06-08 21:27:15.415633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.415741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.415770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.415781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.415788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.415809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 00:31:37.482 [2024-06-08 21:27:15.425610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.425733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.425762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.425772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.425778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.425799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 
00:31:37.482 [2024-06-08 21:27:15.435591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.435694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.435723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.435732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.435739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.435760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 00:31:37.482 [2024-06-08 21:27:15.445719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.445853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.445888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.445898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.482 [2024-06-08 21:27:15.445904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.482 [2024-06-08 21:27:15.445925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.482 qpair failed and we were unable to recover it. 00:31:37.482 [2024-06-08 21:27:15.455926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.482 [2024-06-08 21:27:15.456042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.482 [2024-06-08 21:27:15.456070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.482 [2024-06-08 21:27:15.456079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.456086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.456107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 
00:31:37.483 [2024-06-08 21:27:15.465795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.465911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.465951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.465963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.465969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.465997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 00:31:37.483 [2024-06-08 21:27:15.475823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.475940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.475981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.475993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.476000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.476028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 00:31:37.483 [2024-06-08 21:27:15.485860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.485979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.486020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.486031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.486047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.486074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 
00:31:37.483 [2024-06-08 21:27:15.495901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.496013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.496055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.496065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.496072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.496101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 00:31:37.483 [2024-06-08 21:27:15.505980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.506108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.506140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.506150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.506156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.506179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 00:31:37.483 [2024-06-08 21:27:15.515952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.516188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.516218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.516228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.516235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.516256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 
00:31:37.483 [2024-06-08 21:27:15.525999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.526105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.526135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.526144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.526151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.526172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 00:31:37.483 [2024-06-08 21:27:15.535999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.483 [2024-06-08 21:27:15.536109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.483 [2024-06-08 21:27:15.536138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.483 [2024-06-08 21:27:15.536147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.483 [2024-06-08 21:27:15.536154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.483 [2024-06-08 21:27:15.536175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.483 qpair failed and we were unable to recover it. 00:31:37.483 [2024-06-08 21:27:15.546019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.484 [2024-06-08 21:27:15.546133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.484 [2024-06-08 21:27:15.546162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.484 [2024-06-08 21:27:15.546172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.484 [2024-06-08 21:27:15.546178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.484 [2024-06-08 21:27:15.546199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.484 qpair failed and we were unable to recover it. 
00:31:37.484 [2024-06-08 21:27:15.556161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.484 [2024-06-08 21:27:15.556290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.484 [2024-06-08 21:27:15.556331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.484 [2024-06-08 21:27:15.556342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.484 [2024-06-08 21:27:15.556349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.484 [2024-06-08 21:27:15.556376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.484 qpair failed and we were unable to recover it. 00:31:37.484 [2024-06-08 21:27:15.566117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.484 [2024-06-08 21:27:15.566229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.484 [2024-06-08 21:27:15.566262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.484 [2024-06-08 21:27:15.566273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.484 [2024-06-08 21:27:15.566279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.484 [2024-06-08 21:27:15.566301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.484 qpair failed and we were unable to recover it. 00:31:37.759 [2024-06-08 21:27:15.576159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.759 [2024-06-08 21:27:15.576265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.759 [2024-06-08 21:27:15.576295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.759 [2024-06-08 21:27:15.576305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.759 [2024-06-08 21:27:15.576326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.759 [2024-06-08 21:27:15.576349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.759 qpair failed and we were unable to recover it. 
00:31:37.759 [2024-06-08 21:27:15.586154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.759 [2024-06-08 21:27:15.586263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.759 [2024-06-08 21:27:15.586294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.759 [2024-06-08 21:27:15.586303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.759 [2024-06-08 21:27:15.586310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.759 [2024-06-08 21:27:15.586331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.759 qpair failed and we were unable to recover it. 00:31:37.759 [2024-06-08 21:27:15.596226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.759 [2024-06-08 21:27:15.596350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.759 [2024-06-08 21:27:15.596378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.759 [2024-06-08 21:27:15.596387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.596393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.596423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.606244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.606364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.606393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.606412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.606420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.606441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 
00:31:37.760 [2024-06-08 21:27:15.616247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.616356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.616384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.616394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.616400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.616431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.626255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.626388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.626428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.626439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.626445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.626468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.636303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.636459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.636487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.636497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.636503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.636524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 
00:31:37.760 [2024-06-08 21:27:15.646292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.646420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.646448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.646458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.646465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.646487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.656391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.656563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.656593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.656603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.656609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.656631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.666352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.666473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.666502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.666518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.666526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.666546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 
00:31:37.760 [2024-06-08 21:27:15.676384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.676621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.676649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.676658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.676664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.676684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.686437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.686550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.686578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.686587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.686594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.686614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.696414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.696523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.696551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.696561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.696568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.696589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 
00:31:37.760 [2024-06-08 21:27:15.706655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.760 [2024-06-08 21:27:15.706789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.760 [2024-06-08 21:27:15.706818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.760 [2024-06-08 21:27:15.706827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.760 [2024-06-08 21:27:15.706833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.760 [2024-06-08 21:27:15.706855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.760 qpair failed and we were unable to recover it. 00:31:37.760 [2024-06-08 21:27:15.716535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.716635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.716663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.716673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.716679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.716701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.726519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.726635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.726656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.726664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.726671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.726688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 
00:31:37.761 [2024-06-08 21:27:15.736536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.736655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.736683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.736693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.736699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.736721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.746653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.746785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.746813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.746822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.746828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.746848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.756677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.756783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.756812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.756828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.756835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.756855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 
00:31:37.761 [2024-06-08 21:27:15.766724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.766828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.766856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.766866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.766872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.766893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.776684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.776796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.776824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.776832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.776839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.776859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.786761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.786894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.786922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.786930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.786937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.786957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 
00:31:37.761 [2024-06-08 21:27:15.796818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.796924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.796952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.796961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.796968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.796989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.806820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.806935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.806976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.806988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.806995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.807023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 00:31:37.761 [2024-06-08 21:27:15.816857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.816969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.817010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.817021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.761 [2024-06-08 21:27:15.817028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.761 [2024-06-08 21:27:15.817055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.761 qpair failed and we were unable to recover it. 
00:31:37.761 [2024-06-08 21:27:15.826918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.761 [2024-06-08 21:27:15.827048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.761 [2024-06-08 21:27:15.827089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.761 [2024-06-08 21:27:15.827099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.762 [2024-06-08 21:27:15.827106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.762 [2024-06-08 21:27:15.827134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.762 qpair failed and we were unable to recover it. 00:31:37.762 [2024-06-08 21:27:15.836959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.762 [2024-06-08 21:27:15.837074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.762 [2024-06-08 21:27:15.837106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.762 [2024-06-08 21:27:15.837116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.762 [2024-06-08 21:27:15.837123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.762 [2024-06-08 21:27:15.837146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.762 qpair failed and we were unable to recover it. 00:31:37.762 [2024-06-08 21:27:15.846967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.762 [2024-06-08 21:27:15.847079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.762 [2024-06-08 21:27:15.847115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.762 [2024-06-08 21:27:15.847125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.762 [2024-06-08 21:27:15.847132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:37.762 [2024-06-08 21:27:15.847154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.762 qpair failed and we were unable to recover it. 
00:31:38.024 [2024-06-08 21:27:15.857025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.024 [2024-06-08 21:27:15.857134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.024 [2024-06-08 21:27:15.857162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.024 [2024-06-08 21:27:15.857172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.024 [2024-06-08 21:27:15.857179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.024 [2024-06-08 21:27:15.857199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.024 qpair failed and we were unable to recover it. 00:31:38.024 [2024-06-08 21:27:15.867001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.024 [2024-06-08 21:27:15.867123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.024 [2024-06-08 21:27:15.867153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.024 [2024-06-08 21:27:15.867163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.024 [2024-06-08 21:27:15.867169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.024 [2024-06-08 21:27:15.867190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.024 qpair failed and we were unable to recover it. 00:31:38.024 [2024-06-08 21:27:15.877045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.024 [2024-06-08 21:27:15.877155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.024 [2024-06-08 21:27:15.877184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.024 [2024-06-08 21:27:15.877194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.024 [2024-06-08 21:27:15.877200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.024 [2024-06-08 21:27:15.877221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.024 qpair failed and we were unable to recover it. 
00:31:38.024 [2024-06-08 21:27:15.887142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.024 [2024-06-08 21:27:15.887276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.024 [2024-06-08 21:27:15.887304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.024 [2024-06-08 21:27:15.887314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.024 [2024-06-08 21:27:15.887320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.024 [2024-06-08 21:27:15.887348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.024 qpair failed and we were unable to recover it. 00:31:38.024 [2024-06-08 21:27:15.897114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.024 [2024-06-08 21:27:15.897215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.024 [2024-06-08 21:27:15.897245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.897255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.897261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.897283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.907191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.907345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.907375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.907384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.907390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.907425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 
00:31:38.025 [2024-06-08 21:27:15.917158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.917272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.917301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.917310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.917317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.917337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.927170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.927268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.927296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.927305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.927311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.927332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.937244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.937390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.937432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.937442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.937448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.937469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 
00:31:38.025 [2024-06-08 21:27:15.947350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.947523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.947552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.947562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.947568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.947590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.957235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.957341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.957368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.957377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.957384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.957414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.967290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.967387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.967422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.967431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.967438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.967458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 
00:31:38.025 [2024-06-08 21:27:15.977345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.977457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.977484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.977493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.977504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.977524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.987350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.987469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.987495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.987503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.987510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.987528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 00:31:38.025 [2024-06-08 21:27:15.997229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:15.997328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:15.997353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:15.997362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:15.997368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.025 [2024-06-08 21:27:15.997390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.025 qpair failed and we were unable to recover it. 
00:31:38.025 [2024-06-08 21:27:16.007430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.025 [2024-06-08 21:27:16.007513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.025 [2024-06-08 21:27:16.007537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.025 [2024-06-08 21:27:16.007545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.025 [2024-06-08 21:27:16.007551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.007569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.017323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.017429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.017451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.017459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.017466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.017483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.027477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.027706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.027728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.027736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.027743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.027760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 
00:31:38.026 [2024-06-08 21:27:16.037427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.037538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.037559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.037567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.037573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.037590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.047540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.047636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.047659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.047667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.047673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.047691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.057585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.057705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.057726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.057734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.057740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.057757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 
00:31:38.026 [2024-06-08 21:27:16.067548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.067659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.067680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.067688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.067699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.067716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.077692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.077790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.077810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.077818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.077824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.077840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.087693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.087785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.087804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.087812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.087818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.087834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 
00:31:38.026 [2024-06-08 21:27:16.097704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.097848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.097867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.097875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.097881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.097897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.026 [2024-06-08 21:27:16.107610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.026 [2024-06-08 21:27:16.107720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.026 [2024-06-08 21:27:16.107739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.026 [2024-06-08 21:27:16.107747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.026 [2024-06-08 21:27:16.107753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.026 [2024-06-08 21:27:16.107769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.026 qpair failed and we were unable to recover it. 00:31:38.289 [2024-06-08 21:27:16.117676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.117769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.117788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.117796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.117803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.117818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 
00:31:38.289 [2024-06-08 21:27:16.127740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.127833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.127851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.127860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.127866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.127881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 00:31:38.289 [2024-06-08 21:27:16.137793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.137890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.137908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.137916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.137922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.137938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 00:31:38.289 [2024-06-08 21:27:16.147784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.147885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.147903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.147910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.147916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.147932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 
00:31:38.289 [2024-06-08 21:27:16.157781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.157868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.157886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.157897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.157904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.157919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 00:31:38.289 [2024-06-08 21:27:16.167846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.167942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.167960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.167968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.167974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.167989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 00:31:38.289 [2024-06-08 21:27:16.177864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.177961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.289 [2024-06-08 21:27:16.177978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.289 [2024-06-08 21:27:16.177985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.289 [2024-06-08 21:27:16.177991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.289 [2024-06-08 21:27:16.178006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.289 qpair failed and we were unable to recover it. 
00:31:38.289 [2024-06-08 21:27:16.188047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.289 [2024-06-08 21:27:16.188158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.188184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.188193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.188199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.188220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.197910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.198012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.198038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.198048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.198054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.198074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.208024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.208141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.208159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.208167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.208174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.208190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 
00:31:38.290 [2024-06-08 21:27:16.218069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.218226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.218251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.218260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.218267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.218286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.228033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.228137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.228155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.228163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.228170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.228188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.238015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.238106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.238124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.238131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.238138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.238153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 
00:31:38.290 [2024-06-08 21:27:16.248031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.248123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.248140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.248152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.248159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.248174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.258117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.258208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.258226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.258234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.258240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.258255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.268154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.268251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.268268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.268275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.268282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.268297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 
00:31:38.290 [2024-06-08 21:27:16.278014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.278100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.278117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.278124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.278130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.278145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.288203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.288323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.288340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.288347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.288353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.288368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 00:31:38.290 [2024-06-08 21:27:16.298226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.290 [2024-06-08 21:27:16.298366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.290 [2024-06-08 21:27:16.298383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.290 [2024-06-08 21:27:16.298390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.290 [2024-06-08 21:27:16.298396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.290 [2024-06-08 21:27:16.298425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.290 qpair failed and we were unable to recover it. 
00:31:38.290 [2024-06-08 21:27:16.308311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.308420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.308438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.308445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.308451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.308466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 00:31:38.291 [2024-06-08 21:27:16.318244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.318341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.318358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.318365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.318371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.318386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 00:31:38.291 [2024-06-08 21:27:16.328306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.328398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.328419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.328426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.328432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.328447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 
00:31:38.291 [2024-06-08 21:27:16.338322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.338418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.338447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.338454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.338460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.338476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 00:31:38.291 [2024-06-08 21:27:16.348340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.348464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.348481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.348488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.348494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.348510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 00:31:38.291 [2024-06-08 21:27:16.358379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.358474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.358492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.358499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.358505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.358520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 
00:31:38.291 [2024-06-08 21:27:16.368393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.368486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.368503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.368511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.368517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.368532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 00:31:38.291 [2024-06-08 21:27:16.378412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.291 [2024-06-08 21:27:16.378505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.291 [2024-06-08 21:27:16.378522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.291 [2024-06-08 21:27:16.378529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.291 [2024-06-08 21:27:16.378535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.291 [2024-06-08 21:27:16.378553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.291 qpair failed and we were unable to recover it. 00:31:38.554 [2024-06-08 21:27:16.388332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.388432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.388449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.388457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.388463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.554 [2024-06-08 21:27:16.388478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.554 qpair failed and we were unable to recover it. 
00:31:38.554 [2024-06-08 21:27:16.398479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.398565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.398582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.398589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.398595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.554 [2024-06-08 21:27:16.398610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.554 qpair failed and we were unable to recover it. 00:31:38.554 [2024-06-08 21:27:16.408537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.408631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.408648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.408656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.408661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.554 [2024-06-08 21:27:16.408676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.554 qpair failed and we were unable to recover it. 00:31:38.554 [2024-06-08 21:27:16.418436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.418526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.418543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.418550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.418556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.554 [2024-06-08 21:27:16.418571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.554 qpair failed and we were unable to recover it. 
00:31:38.554 [2024-06-08 21:27:16.428492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.428590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.428611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.428618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.428624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.554 [2024-06-08 21:27:16.428640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.554 qpair failed and we were unable to recover it. 00:31:38.554 [2024-06-08 21:27:16.438579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.438775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.438792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.438799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.438805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.554 [2024-06-08 21:27:16.438820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.554 qpair failed and we were unable to recover it. 00:31:38.554 [2024-06-08 21:27:16.448632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.554 [2024-06-08 21:27:16.448720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.554 [2024-06-08 21:27:16.448736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.554 [2024-06-08 21:27:16.448743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.554 [2024-06-08 21:27:16.448749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.448764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 
00:31:38.555 [2024-06-08 21:27:16.458674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.458768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.458785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.458792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.458798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.458814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.468677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.468773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.468789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.468796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.468803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.468821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.478719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.478809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.478826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.478833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.478839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.478854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 
00:31:38.555 [2024-06-08 21:27:16.488785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.488884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.488900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.488907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.488913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.488928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.498784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.498919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.498936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.498943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.498949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.498964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.508801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.508896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.508913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.508920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.508926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.508941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 
00:31:38.555 [2024-06-08 21:27:16.518803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.518895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.518915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.518923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.518929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.518943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.528916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.529012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.529029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.529036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.529042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.529057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.538931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.539058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.539075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.539082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.539088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.539103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 
00:31:38.555 [2024-06-08 21:27:16.548906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.549006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.549032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.549041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.549048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.549068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.558909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.559001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.555 [2024-06-08 21:27:16.559020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.555 [2024-06-08 21:27:16.559028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.555 [2024-06-08 21:27:16.559040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.555 [2024-06-08 21:27:16.559059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.555 qpair failed and we were unable to recover it. 00:31:38.555 [2024-06-08 21:27:16.569002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.555 [2024-06-08 21:27:16.569103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.569120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.569128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.569134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.569149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 
00:31:38.556 [2024-06-08 21:27:16.578958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.579060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.579086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.579094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.579101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.579121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 00:31:38.556 [2024-06-08 21:27:16.588924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.589014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.589033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.589040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.589046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.589063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 00:31:38.556 [2024-06-08 21:27:16.599011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.599096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.599112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.599120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.599126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.599142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 
00:31:38.556 [2024-06-08 21:27:16.609106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.609197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.609215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.609222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.609228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.609244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 00:31:38.556 [2024-06-08 21:27:16.619131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.619223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.619241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.619248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.619254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.619272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 00:31:38.556 [2024-06-08 21:27:16.629108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.629201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.629218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.629225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.629231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.629246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 
00:31:38.556 [2024-06-08 21:27:16.639115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.556 [2024-06-08 21:27:16.639195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.556 [2024-06-08 21:27:16.639212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.556 [2024-06-08 21:27:16.639219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.556 [2024-06-08 21:27:16.639225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.556 [2024-06-08 21:27:16.639240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.556 qpair failed and we were unable to recover it. 00:31:38.819 [2024-06-08 21:27:16.649158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.819 [2024-06-08 21:27:16.649398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.819 [2024-06-08 21:27:16.649419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.819 [2024-06-08 21:27:16.649431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.819 [2024-06-08 21:27:16.649437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.819 [2024-06-08 21:27:16.649453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.819 qpair failed and we were unable to recover it. 00:31:38.819 [2024-06-08 21:27:16.659252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.819 [2024-06-08 21:27:16.659343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.819 [2024-06-08 21:27:16.659360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.819 [2024-06-08 21:27:16.659367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.819 [2024-06-08 21:27:16.659373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.819 [2024-06-08 21:27:16.659388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.819 qpair failed and we were unable to recover it. 
00:31:38.819 [2024-06-08 21:27:16.669243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.819 [2024-06-08 21:27:16.669353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.819 [2024-06-08 21:27:16.669370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.819 [2024-06-08 21:27:16.669377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.819 [2024-06-08 21:27:16.669383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.819 [2024-06-08 21:27:16.669398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.819 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.679265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.679366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.679383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.679391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.679397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.679417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.689299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.689414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.689431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.689438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.689444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.689459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 
00:31:38.820 [2024-06-08 21:27:16.699365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.699456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.699473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.699481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.699487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.699501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.709349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.709449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.709466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.709473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.709479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.709494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.719369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.719458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.719475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.719482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.719488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.719503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 
00:31:38.820 [2024-06-08 21:27:16.729398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.729499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.729516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.729523] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.729529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.729544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.739487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.739579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.739595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.739606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.739612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.739627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.749462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.749662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.749679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.749686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.749692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.749707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 
00:31:38.820 [2024-06-08 21:27:16.759494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.759572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.759589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.759596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.759602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.759617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.769542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.769635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.769651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.769659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.769665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.769681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 00:31:38.820 [2024-06-08 21:27:16.779613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.779709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.820 [2024-06-08 21:27:16.779726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.820 [2024-06-08 21:27:16.779733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.820 [2024-06-08 21:27:16.779739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.820 [2024-06-08 21:27:16.779754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.820 qpair failed and we were unable to recover it. 
00:31:38.820 [2024-06-08 21:27:16.789541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.820 [2024-06-08 21:27:16.789648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.789664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.789672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.789678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.789693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.799602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.799733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.799749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.799757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.799763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.799777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.809682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.809775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.809791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.809798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.809805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.809820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 
00:31:38.821 [2024-06-08 21:27:16.819685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.819814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.819831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.819838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.819844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.819858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.829652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.829745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.829765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.829773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.829779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.829794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.839703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.839795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.839812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.839819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.839825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.839840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 
00:31:38.821 [2024-06-08 21:27:16.849755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.849866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.849883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.849891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.849897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.849912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.859808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.859902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.859920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.859927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.859933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.859949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.869798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.869887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.869903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.869911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.869917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.869935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 
00:31:38.821 [2024-06-08 21:27:16.879694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.879786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.879803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.879810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.879816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.879831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.889911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.889998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.890014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.890021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.890028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.890042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 00:31:38.821 [2024-06-08 21:27:16.899820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.821 [2024-06-08 21:27:16.899923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.821 [2024-06-08 21:27:16.899949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.821 [2024-06-08 21:27:16.899958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.821 [2024-06-08 21:27:16.899965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:38.821 [2024-06-08 21:27:16.899985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.821 qpair failed and we were unable to recover it. 
00:31:39.084 [2024-06-08 21:27:16.909896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.909994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.910013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.910021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.910028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.910049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 00:31:39.084 [2024-06-08 21:27:16.919852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.920065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.920087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.920095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.920101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.920117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 00:31:39.084 [2024-06-08 21:27:16.929880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.929973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.929990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.929998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.930004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.930019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 
00:31:39.084 [2024-06-08 21:27:16.940022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.940126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.940142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.940149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.940156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.940170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 00:31:39.084 [2024-06-08 21:27:16.950045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.950133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.950150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.950158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.950164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.950178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 00:31:39.084 [2024-06-08 21:27:16.960055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.960138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.960155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.960163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.960169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.960191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 
00:31:39.084 [2024-06-08 21:27:16.970134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.084 [2024-06-08 21:27:16.970220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.084 [2024-06-08 21:27:16.970237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.084 [2024-06-08 21:27:16.970244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.084 [2024-06-08 21:27:16.970250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.084 [2024-06-08 21:27:16.970265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.084 qpair failed and we were unable to recover it. 00:31:39.084 [2024-06-08 21:27:16.980048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:16.980143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:16.980159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:16.980166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:16.980172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:16.980187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:16.990149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:16.990238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:16.990254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:16.990261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:16.990267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:16.990282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 
00:31:39.085 [2024-06-08 21:27:17.000162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.000245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.000262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.000269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.000275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.000290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:17.010235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.010328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.010349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.010356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.010362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.010377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:17.020410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.020504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.020520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.020528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.020534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.020549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 
00:31:39.085 [2024-06-08 21:27:17.030257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.030390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.030411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.030419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.030425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.030440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:17.040278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.040366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.040383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.040390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.040396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.040419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:17.050368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.050461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.050478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.050485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.050495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.050511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 
00:31:39.085 [2024-06-08 21:27:17.060405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.060503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.060519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.060526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.060533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.060547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:17.070361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.070463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.070480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.070487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.070493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.070508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 00:31:39.085 [2024-06-08 21:27:17.080430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.080528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.080545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.080552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.080558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.085 [2024-06-08 21:27:17.080573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.085 qpair failed and we were unable to recover it. 
00:31:39.085 [2024-06-08 21:27:17.090474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.085 [2024-06-08 21:27:17.090568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.085 [2024-06-08 21:27:17.090585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.085 [2024-06-08 21:27:17.090592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.085 [2024-06-08 21:27:17.090598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.090613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 00:31:39.086 [2024-06-08 21:27:17.100574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.100686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.100702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.100710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.100716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.100731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 00:31:39.086 [2024-06-08 21:27:17.110515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.110613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.110629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.110636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.110642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.110658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 
00:31:39.086 [2024-06-08 21:27:17.120577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.120702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.120718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.120726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.120732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.120747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 00:31:39.086 [2024-06-08 21:27:17.130641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.130764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.130783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.130791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.130797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.130813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 00:31:39.086 [2024-06-08 21:27:17.140565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.140660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.140677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.140684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.140693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.140708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 
00:31:39.086 [2024-06-08 21:27:17.150633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.150723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.150739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.150747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.150753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.150767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 00:31:39.086 [2024-06-08 21:27:17.160659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.160748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.160766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.160773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.160780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.160795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 00:31:39.086 [2024-06-08 21:27:17.170603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.086 [2024-06-08 21:27:17.170694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.086 [2024-06-08 21:27:17.170711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.086 [2024-06-08 21:27:17.170719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.086 [2024-06-08 21:27:17.170725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.086 [2024-06-08 21:27:17.170740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.086 qpair failed and we were unable to recover it. 
00:31:39.349 [2024-06-08 21:27:17.180757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.180852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.180869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.180876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.180883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.180898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.190720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.190811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.190828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.190836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.190841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.190856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.200642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.200733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.200749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.200757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.200763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.200778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 
00:31:39.349 [2024-06-08 21:27:17.210831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.210926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.210942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.210950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.210956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.210971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.220864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.220963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.220980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.220988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.220994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.221009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.230883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.230975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.230992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.231002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.231009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.231023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 
00:31:39.349 [2024-06-08 21:27:17.240892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.240982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.240999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.241006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.241013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.241028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.250956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.251056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.251081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.251090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.251097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.251117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.261000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.261128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.261154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.261164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.261171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.261193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 
00:31:39.349 [2024-06-08 21:27:17.271006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.349 [2024-06-08 21:27:17.271106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.349 [2024-06-08 21:27:17.271132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.349 [2024-06-08 21:27:17.271141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.349 [2024-06-08 21:27:17.271148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.349 [2024-06-08 21:27:17.271167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.349 qpair failed and we were unable to recover it. 00:31:39.349 [2024-06-08 21:27:17.281016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.281122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.281148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.281157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.281163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.281183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.291130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.291267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.291293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.291302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.291308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.291328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 
00:31:39.350 [2024-06-08 21:27:17.301151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.301262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.301281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.301290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.301296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.301313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.311151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.311250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.311268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.311276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.311282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.311297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.321142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.321229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.321245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.321258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.321264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.321280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 
00:31:39.350 [2024-06-08 21:27:17.331213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.331324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.331341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.331349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.331355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.331370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.341200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.341295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.341312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.341319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.341325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.341340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.351231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.351326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.351343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.351351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.351357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.351372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 
00:31:39.350 [2024-06-08 21:27:17.361269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.361367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.361384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.361391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.361397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.361420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.371339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.371484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.371502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.371509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.371515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.371531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 00:31:39.350 [2024-06-08 21:27:17.381385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.381479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.381496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.381503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.381509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.350 [2024-06-08 21:27:17.381525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.350 qpair failed and we were unable to recover it. 
00:31:39.350 [2024-06-08 21:27:17.391342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.350 [2024-06-08 21:27:17.391441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.350 [2024-06-08 21:27:17.391458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.350 [2024-06-08 21:27:17.391465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.350 [2024-06-08 21:27:17.391471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.351 [2024-06-08 21:27:17.391487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.351 qpair failed and we were unable to recover it. 00:31:39.351 [2024-06-08 21:27:17.401271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.351 [2024-06-08 21:27:17.401361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.351 [2024-06-08 21:27:17.401378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.351 [2024-06-08 21:27:17.401385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.351 [2024-06-08 21:27:17.401392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.351 [2024-06-08 21:27:17.401412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.351 qpair failed and we were unable to recover it. 00:31:39.351 [2024-06-08 21:27:17.411419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.351 [2024-06-08 21:27:17.411508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.351 [2024-06-08 21:27:17.411528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.351 [2024-06-08 21:27:17.411535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.351 [2024-06-08 21:27:17.411541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.351 [2024-06-08 21:27:17.411557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.351 qpair failed and we were unable to recover it. 
00:31:39.351 [2024-06-08 21:27:17.421482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.351 [2024-06-08 21:27:17.421575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.351 [2024-06-08 21:27:17.421592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.351 [2024-06-08 21:27:17.421599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.351 [2024-06-08 21:27:17.421606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.351 [2024-06-08 21:27:17.421621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.351 qpair failed and we were unable to recover it. 00:31:39.351 [2024-06-08 21:27:17.431448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.351 [2024-06-08 21:27:17.431540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.351 [2024-06-08 21:27:17.431557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.351 [2024-06-08 21:27:17.431564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.351 [2024-06-08 21:27:17.431570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.351 [2024-06-08 21:27:17.431585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.351 qpair failed and we were unable to recover it. 00:31:39.613 [2024-06-08 21:27:17.441454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.613 [2024-06-08 21:27:17.441540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.613 [2024-06-08 21:27:17.441557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.613 [2024-06-08 21:27:17.441565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.613 [2024-06-08 21:27:17.441571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.613 [2024-06-08 21:27:17.441586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.613 qpair failed and we were unable to recover it. 
00:31:39.613 [2024-06-08 21:27:17.451547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.451647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.451664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.451671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.451678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.451696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.461542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.461634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.461652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.461660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.461666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.461681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.471530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.471661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.471678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.471686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.471692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.471707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 
00:31:39.614 [2024-06-08 21:27:17.481476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.481581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.481597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.481605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.481611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.481626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.491658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.491749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.491765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.491773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.491778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.491794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.501706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.501801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.501821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.501829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.501835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.501850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 
00:31:39.614 [2024-06-08 21:27:17.511689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.511778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.511795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.511803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.511809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.511824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.521589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.521678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.521695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.521703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.521709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.521725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.531652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.531875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.531892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.531900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.531906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.531921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 
00:31:39.614 [2024-06-08 21:27:17.541808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.541902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.541919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.541927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.541936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.541952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.551810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.551936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.551961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.614 [2024-06-08 21:27:17.551970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.614 [2024-06-08 21:27:17.551977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.614 [2024-06-08 21:27:17.551997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.614 qpair failed and we were unable to recover it. 00:31:39.614 [2024-06-08 21:27:17.561813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.614 [2024-06-08 21:27:17.561907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.614 [2024-06-08 21:27:17.561933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.561942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.561948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.561968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 
00:31:39.615 [2024-06-08 21:27:17.571862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.571965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.571991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.571999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.572006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.572025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.581896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.581996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.582022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.582031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.582037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.582057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.591996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.592100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.592118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.592126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.592132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.592149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 
00:31:39.615 [2024-06-08 21:27:17.601907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.601996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.602013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.602021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.602027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.602042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.612037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.612146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.612163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.612170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.612177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.612192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.622009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.622102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.622118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.622126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.622132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.622147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 
00:31:39.615 [2024-06-08 21:27:17.632020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.632159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.632176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.632184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.632195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.632210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.642029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.642117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.642134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.642142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.642148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.642163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.652105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.652199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.652226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.652235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.652242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.652261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 
00:31:39.615 [2024-06-08 21:27:17.662116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.662212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.662231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.662239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.662245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.662261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.672079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.615 [2024-06-08 21:27:17.672166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.615 [2024-06-08 21:27:17.672183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.615 [2024-06-08 21:27:17.672191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.615 [2024-06-08 21:27:17.672197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.615 [2024-06-08 21:27:17.672212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.615 qpair failed and we were unable to recover it. 00:31:39.615 [2024-06-08 21:27:17.682131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.616 [2024-06-08 21:27:17.682223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.616 [2024-06-08 21:27:17.682249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.616 [2024-06-08 21:27:17.682257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.616 [2024-06-08 21:27:17.682264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.616 [2024-06-08 21:27:17.682284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.616 qpair failed and we were unable to recover it. 
00:31:39.616 [2024-06-08 21:27:17.692182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.616 [2024-06-08 21:27:17.692278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.616 [2024-06-08 21:27:17.692297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.616 [2024-06-08 21:27:17.692304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.616 [2024-06-08 21:27:17.692310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.616 [2024-06-08 21:27:17.692327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.616 qpair failed and we were unable to recover it. 00:31:39.616 [2024-06-08 21:27:17.702231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.616 [2024-06-08 21:27:17.702326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.616 [2024-06-08 21:27:17.702343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.616 [2024-06-08 21:27:17.702350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.616 [2024-06-08 21:27:17.702356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.616 [2024-06-08 21:27:17.702372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.616 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.712211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.712308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.712325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.712333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.712339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.712354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 
00:31:39.879 [2024-06-08 21:27:17.722253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.722340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.722357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.722372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.722378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.722394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.732302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.732437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.732454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.732462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.732468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.732483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.742334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.742430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.742448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.742455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.742461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.742477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 
00:31:39.879 [2024-06-08 21:27:17.752198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.752296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.752313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.752320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.752326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.752341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.762323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.762420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.762438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.762445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.762451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.762467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.772340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.772461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.772479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.772486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.772493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.772508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 
00:31:39.879 [2024-06-08 21:27:17.782337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.782440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.782457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.782465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.782471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.782487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.792449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.792542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.792559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.792566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.792572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.792587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.802352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.802444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.802461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.802468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.802475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.802490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 
00:31:39.879 [2024-06-08 21:27:17.812527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.879 [2024-06-08 21:27:17.812620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.879 [2024-06-08 21:27:17.812636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.879 [2024-06-08 21:27:17.812647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.879 [2024-06-08 21:27:17.812653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.879 [2024-06-08 21:27:17.812669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.879 qpair failed and we were unable to recover it. 00:31:39.879 [2024-06-08 21:27:17.822550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.822681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.822698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.822706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.822712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.822727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.832550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.832641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.832658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.832665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.832671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.832686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 
00:31:39.880 [2024-06-08 21:27:17.842581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.842687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.842704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.842711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.842717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.842732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.852640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.852737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.852753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.852761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.852767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.852782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.862685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.862893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.862910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.862917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.862923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.862938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 
00:31:39.880 [2024-06-08 21:27:17.872685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.872777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.872794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.872801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.872807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.872822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.882688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.882779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.882796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.882803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.882810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.882824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.892769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.892862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.892879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.892886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.892892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.892907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 
00:31:39.880 [2024-06-08 21:27:17.902817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.902916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.902936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.902943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.902950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.902965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.912772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.912874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.912900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.912909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.912915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.912935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 00:31:39.880 [2024-06-08 21:27:17.922863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.922974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.922992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.923000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.923006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.880 [2024-06-08 21:27:17.923022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.880 qpair failed and we were unable to recover it. 
00:31:39.880 [2024-06-08 21:27:17.932909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.880 [2024-06-08 21:27:17.933033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.880 [2024-06-08 21:27:17.933059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.880 [2024-06-08 21:27:17.933068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.880 [2024-06-08 21:27:17.933074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.881 [2024-06-08 21:27:17.933094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.881 qpair failed and we were unable to recover it. 00:31:39.881 [2024-06-08 21:27:17.942904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.881 [2024-06-08 21:27:17.943008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.881 [2024-06-08 21:27:17.943033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.881 [2024-06-08 21:27:17.943042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.881 [2024-06-08 21:27:17.943049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.881 [2024-06-08 21:27:17.943073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.881 qpair failed and we were unable to recover it. 00:31:39.881 [2024-06-08 21:27:17.952899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.881 [2024-06-08 21:27:17.952993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.881 [2024-06-08 21:27:17.953019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.881 [2024-06-08 21:27:17.953027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.881 [2024-06-08 21:27:17.953034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.881 [2024-06-08 21:27:17.953054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.881 qpair failed and we were unable to recover it. 
00:31:39.881 [2024-06-08 21:27:17.962939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.881 [2024-06-08 21:27:17.963066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.881 [2024-06-08 21:27:17.963092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.881 [2024-06-08 21:27:17.963101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.881 [2024-06-08 21:27:17.963107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:39.881 [2024-06-08 21:27:17.963127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.881 qpair failed and we were unable to recover it. 00:31:40.143 [2024-06-08 21:27:17.972970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.143 [2024-06-08 21:27:17.973095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.143 [2024-06-08 21:27:17.973121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.143 [2024-06-08 21:27:17.973130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.143 [2024-06-08 21:27:17.973137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.143 [2024-06-08 21:27:17.973157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.143 qpair failed and we were unable to recover it. 00:31:40.143 [2024-06-08 21:27:17.983032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.143 [2024-06-08 21:27:17.983124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.143 [2024-06-08 21:27:17.983142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.143 [2024-06-08 21:27:17.983150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.143 [2024-06-08 21:27:17.983157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.143 [2024-06-08 21:27:17.983173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.143 qpair failed and we were unable to recover it. 
00:31:40.143 [2024-06-08 21:27:17.993004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.143 [2024-06-08 21:27:17.993096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.143 [2024-06-08 21:27:17.993118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.143 [2024-06-08 21:27:17.993126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.143 [2024-06-08 21:27:17.993132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.143 [2024-06-08 21:27:17.993147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.143 qpair failed and we were unable to recover it. 00:31:40.143 [2024-06-08 21:27:18.002913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.143 [2024-06-08 21:27:18.003000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.143 [2024-06-08 21:27:18.003018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.143 [2024-06-08 21:27:18.003025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.143 [2024-06-08 21:27:18.003031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.143 [2024-06-08 21:27:18.003047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.143 qpair failed and we were unable to recover it. 00:31:40.143 [2024-06-08 21:27:18.013071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.143 [2024-06-08 21:27:18.013162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.143 [2024-06-08 21:27:18.013179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.143 [2024-06-08 21:27:18.013187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.143 [2024-06-08 21:27:18.013193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.143 [2024-06-08 21:27:18.013208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.143 qpair failed and we were unable to recover it. 
00:31:40.143 [2024-06-08 21:27:18.023003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.143 [2024-06-08 21:27:18.023139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.023157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.023164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.023171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.023186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.033121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.033213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.033230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.033238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.033244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.033265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.043059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.043169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.043186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.043194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.043201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.043215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 
00:31:40.144 [2024-06-08 21:27:18.053207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.053298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.053314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.053322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.053328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.053343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.063240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.063353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.063372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.063379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.063385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.063406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.073217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.073316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.073333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.073341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.073347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.073362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 
00:31:40.144 [2024-06-08 21:27:18.083158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.083249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.083266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.083273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.083279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.083295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.093232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.093312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.093328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.093336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.093342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.093357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.103426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.103550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.103567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.103575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.103581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.103597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 
00:31:40.144 [2024-06-08 21:27:18.113338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.113435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.113453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.113461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.113467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.113482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.123326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.123420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.123437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.123446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.123456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.123471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 00:31:40.144 [2024-06-08 21:27:18.133405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.144 [2024-06-08 21:27:18.133497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.144 [2024-06-08 21:27:18.133513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.144 [2024-06-08 21:27:18.133521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.144 [2024-06-08 21:27:18.133528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.144 [2024-06-08 21:27:18.133544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.144 qpair failed and we were unable to recover it. 
00:31:40.145 [2024-06-08 21:27:18.143412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.143543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.143561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.143569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.143575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.143591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 00:31:40.145 [2024-06-08 21:27:18.153438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.153533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.153549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.153557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.153564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.153579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 00:31:40.145 [2024-06-08 21:27:18.163466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.163549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.163567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.163575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.163581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.163597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 
00:31:40.145 [2024-06-08 21:27:18.173656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.173749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.173765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.173773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.173780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.173795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 00:31:40.145 [2024-06-08 21:27:18.183497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.183583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.183600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.183608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.183614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.183630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 00:31:40.145 [2024-06-08 21:27:18.193541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.193630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.193647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.193654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.193661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.193677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 
00:31:40.145 [2024-06-08 21:27:18.203561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.203675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.203692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.203700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.203706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.203721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 00:31:40.145 [2024-06-08 21:27:18.213660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.213755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.213772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.213783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.213789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.213805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 00:31:40.145 [2024-06-08 21:27:18.223657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.145 [2024-06-08 21:27:18.223744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.145 [2024-06-08 21:27:18.223761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.145 [2024-06-08 21:27:18.223769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.145 [2024-06-08 21:27:18.223776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.145 [2024-06-08 21:27:18.223791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.145 qpair failed and we were unable to recover it. 
00:31:40.408 [2024-06-08 21:27:18.233697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.233789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.233806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.233814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.233820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.233836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.243574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.243661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.243678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.243685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.243693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.243708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.253636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.253718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.253735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.253742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.253748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.253763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 
00:31:40.408 [2024-06-08 21:27:18.263761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.263867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.263884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.263891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.263898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.263914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.273780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.273870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.273887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.273895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.273901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.273916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.283950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.284041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.284057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.284065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.284072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.284086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 
00:31:40.408 [2024-06-08 21:27:18.293872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.293960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.293977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.293984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.293991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.294006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.303867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.304002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.304019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.304031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.304037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.304052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.313760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.313852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.313869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.313877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.313883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.313898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 
00:31:40.408 [2024-06-08 21:27:18.323935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.324026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.408 [2024-06-08 21:27:18.324044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.408 [2024-06-08 21:27:18.324052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.408 [2024-06-08 21:27:18.324058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.408 [2024-06-08 21:27:18.324074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.408 qpair failed and we were unable to recover it. 00:31:40.408 [2024-06-08 21:27:18.334059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.408 [2024-06-08 21:27:18.334264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.334291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.334300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.334306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.334326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.344029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.344142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.344161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.344169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.344175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.344191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 
00:31:40.409 [2024-06-08 21:27:18.354004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.354141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.354159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.354167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.354173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.354188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.364036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.364132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.364151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.364159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.364165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.364181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.374089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.374178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.374195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.374203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.374209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.374226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 
00:31:40.409 [2024-06-08 21:27:18.384085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.384178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.384204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.384213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.384219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.384240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.394118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.394227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.394250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.394258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.394265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.394282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.404109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.404197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.404214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.404222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.404228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.404243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 
00:31:40.409 [2024-06-08 21:27:18.414191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.414279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.414296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.414304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.414310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.414326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.424182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.424268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.424285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.424293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.424299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.424315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 00:31:40.409 [2024-06-08 21:27:18.434184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.434283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.434301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.434308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.434315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.409 [2024-06-08 21:27:18.434333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.409 qpair failed and we were unable to recover it. 
00:31:40.409 [2024-06-08 21:27:18.444154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.409 [2024-06-08 21:27:18.444279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.409 [2024-06-08 21:27:18.444297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.409 [2024-06-08 21:27:18.444305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.409 [2024-06-08 21:27:18.444311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.410 [2024-06-08 21:27:18.444326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.410 qpair failed and we were unable to recover it. 00:31:40.410 [2024-06-08 21:27:18.454271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.410 [2024-06-08 21:27:18.454358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.410 [2024-06-08 21:27:18.454376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.410 [2024-06-08 21:27:18.454384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.410 [2024-06-08 21:27:18.454391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.410 [2024-06-08 21:27:18.454415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.410 qpair failed and we were unable to recover it. 00:31:40.410 [2024-06-08 21:27:18.464307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.410 [2024-06-08 21:27:18.464408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.410 [2024-06-08 21:27:18.464427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.410 [2024-06-08 21:27:18.464435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.410 [2024-06-08 21:27:18.464441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.410 [2024-06-08 21:27:18.464457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.410 qpair failed and we were unable to recover it. 
00:31:40.410 [2024-06-08 21:27:18.474336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.410 [2024-06-08 21:27:18.474474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.410 [2024-06-08 21:27:18.474491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.410 [2024-06-08 21:27:18.474499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.410 [2024-06-08 21:27:18.474505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.410 [2024-06-08 21:27:18.474521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.410 qpair failed and we were unable to recover it. 00:31:40.410 [2024-06-08 21:27:18.484354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.410 [2024-06-08 21:27:18.484441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.410 [2024-06-08 21:27:18.484465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.410 [2024-06-08 21:27:18.484473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.410 [2024-06-08 21:27:18.484479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.410 [2024-06-08 21:27:18.484494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.410 qpair failed and we were unable to recover it. 00:31:40.410 [2024-06-08 21:27:18.494409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.410 [2024-06-08 21:27:18.494499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.410 [2024-06-08 21:27:18.494516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.410 [2024-06-08 21:27:18.494525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.410 [2024-06-08 21:27:18.494531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.410 [2024-06-08 21:27:18.494547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.410 qpair failed and we were unable to recover it. 
00:31:40.672 [2024-06-08 21:27:18.504290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.672 [2024-06-08 21:27:18.504377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.672 [2024-06-08 21:27:18.504394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.672 [2024-06-08 21:27:18.504408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.672 [2024-06-08 21:27:18.504415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.672 [2024-06-08 21:27:18.504430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.672 qpair failed and we were unable to recover it. 00:31:40.672 [2024-06-08 21:27:18.514309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.672 [2024-06-08 21:27:18.514400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.672 [2024-06-08 21:27:18.514421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.672 [2024-06-08 21:27:18.514430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.672 [2024-06-08 21:27:18.514438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.672 [2024-06-08 21:27:18.514454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.672 qpair failed and we were unable to recover it. 00:31:40.672 [2024-06-08 21:27:18.524466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.672 [2024-06-08 21:27:18.524563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.672 [2024-06-08 21:27:18.524580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.672 [2024-06-08 21:27:18.524589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.672 [2024-06-08 21:27:18.524595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.672 [2024-06-08 21:27:18.524614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.672 qpair failed and we were unable to recover it. 
00:31:40.672 [2024-06-08 21:27:18.534522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.672 [2024-06-08 21:27:18.534617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.672 [2024-06-08 21:27:18.534634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.672 [2024-06-08 21:27:18.534642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.672 [2024-06-08 21:27:18.534649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.672 [2024-06-08 21:27:18.534664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.672 qpair failed and we were unable to recover it. 00:31:40.672 [2024-06-08 21:27:18.544444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.672 [2024-06-08 21:27:18.544578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.672 [2024-06-08 21:27:18.544595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.672 [2024-06-08 21:27:18.544603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.672 [2024-06-08 21:27:18.544609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.672 [2024-06-08 21:27:18.544625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.672 qpair failed and we were unable to recover it. 00:31:40.672 [2024-06-08 21:27:18.554553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.554645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.554662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.554670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.554676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.554691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 
00:31:40.673 [2024-06-08 21:27:18.564568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.564657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.564674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.564681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.564687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.564703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.574573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.574669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.574690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.574698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.574704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.574720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.584604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.584688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.584705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.584713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.584719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.584734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 
00:31:40.673 [2024-06-08 21:27:18.594634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.594723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.594741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.594748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.594755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.594770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.604656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.604748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.604765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.604773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.604779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.604794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.614837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.614929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.614946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.614953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.614964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.614979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 
00:31:40.673 [2024-06-08 21:27:18.624811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.624900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.624917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.624925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.624931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.624946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.634770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.634879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.634896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.634903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.634910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.634924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.644693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.644780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.644798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.644806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.644812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.644828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 
00:31:40.673 [2024-06-08 21:27:18.654840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.654929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.654947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.654955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.673 [2024-06-08 21:27:18.654961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.673 [2024-06-08 21:27:18.654978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.673 qpair failed and we were unable to recover it. 00:31:40.673 [2024-06-08 21:27:18.664757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.673 [2024-06-08 21:27:18.664946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.673 [2024-06-08 21:27:18.664963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.673 [2024-06-08 21:27:18.664970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.664977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.664992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.674 [2024-06-08 21:27:18.674858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.674955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.674972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.674979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.674986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.675001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 
00:31:40.674 [2024-06-08 21:27:18.684959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.685057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.685074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.685082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.685088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.685103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.674 [2024-06-08 21:27:18.694963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.695069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.695095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.695105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.695112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.695132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.674 [2024-06-08 21:27:18.705000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.705100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.705125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.705134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.705146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.705166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 
00:31:40.674 [2024-06-08 21:27:18.714995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.715094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.715120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.715129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.715135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.715156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.674 [2024-06-08 21:27:18.725033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.725146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.725164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.725172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.725179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.725195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.674 [2024-06-08 21:27:18.735063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.735158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.735175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.735183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.735189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.735206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 
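For context on what these CONNECTs are aimed at: a subsystem such as nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 is normally stood up on the target with RPCs along the lines below. This is a minimal sketch only; the malloc bdev, its size and the serial number are placeholders rather than the configuration used by this run.

  # Minimal target-side sketch; Malloc0, 64 MiB / 512 B and the serial are placeholders.
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp
  sudo ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # "Unknown controller ID 0x1" in ctrlr.c means an I/O qpair's CONNECT named a
  # controller this subsystem no longer tracks (e.g. its admin qpair already went away).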
00:31:40.674 [2024-06-08 21:27:18.745105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.745234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.745252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.745259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.745265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.745282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.674 [2024-06-08 21:27:18.755118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.674 [2024-06-08 21:27:18.755211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.674 [2024-06-08 21:27:18.755228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.674 [2024-06-08 21:27:18.755236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.674 [2024-06-08 21:27:18.755242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.674 [2024-06-08 21:27:18.755257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.674 qpair failed and we were unable to recover it. 00:31:40.937 [2024-06-08 21:27:18.765141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.765240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.765257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.765265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.765271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.765287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 
00:31:40.937 [2024-06-08 21:27:18.775211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.775301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.775318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.775325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.775333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.775348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 00:31:40.937 [2024-06-08 21:27:18.785195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.785293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.785310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.785318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.785324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.785339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 00:31:40.937 [2024-06-08 21:27:18.795231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.795326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.795343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.795355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.795361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.795377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 
00:31:40.937 [2024-06-08 21:27:18.805239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.805325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.805342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.805349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.805356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.805372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 00:31:40.937 [2024-06-08 21:27:18.815285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.815376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.815393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.815400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.815412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.815427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 00:31:40.937 [2024-06-08 21:27:18.825306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.825398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.825419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.825427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.825434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.825449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 
00:31:40.937 [2024-06-08 21:27:18.835205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.835296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.937 [2024-06-08 21:27:18.835312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.937 [2024-06-08 21:27:18.835320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.937 [2024-06-08 21:27:18.835327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.937 [2024-06-08 21:27:18.835342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.937 qpair failed and we were unable to recover it. 00:31:40.937 [2024-06-08 21:27:18.845364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.937 [2024-06-08 21:27:18.845451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.845468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.845476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.845482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.845497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.855426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.855512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.855529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.855536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.855543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.855558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 
00:31:40.938 [2024-06-08 21:27:18.865433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.865518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.865535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.865543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.865549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.865564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.875439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.875531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.875548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.875555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.875561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.875577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.885471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.885562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.885580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.885591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.885597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.885613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 
00:31:40.938 [2024-06-08 21:27:18.895533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.895624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.895640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.895648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.895654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.895669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.905470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.905563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.905580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.905587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.905594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.905610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.915579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.915672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.915689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.915697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.915704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.915719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 
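When I/O qpair connects keep failing like this, one quick sanity check from outside SPDK's own initiator is to poke the same listener with nvme-cli from the kernel host. The commands below are a generic sketch against the address and NQN shown in the log, not part of this test script.

  # Generic nvme-cli check against the same listener; not taken from this run.
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1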
00:31:40.938 [2024-06-08 21:27:18.925454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.925542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.925559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.925566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.925573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.925588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.935518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.935610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.935627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.935635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.935641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.935656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 00:31:40.938 [2024-06-08 21:27:18.945626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.945711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.945728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.945735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.945742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.938 [2024-06-08 21:27:18.945757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.938 qpair failed and we were unable to recover it. 
00:31:40.938 [2024-06-08 21:27:18.955672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.938 [2024-06-08 21:27:18.955765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.938 [2024-06-08 21:27:18.955783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.938 [2024-06-08 21:27:18.955790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.938 [2024-06-08 21:27:18.955796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:18.955812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 00:31:40.939 [2024-06-08 21:27:18.965734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:18.965863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:18.965880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:18.965889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:18.965895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:18.965910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 00:31:40.939 [2024-06-08 21:27:18.975763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:18.975852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:18.975873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:18.975881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:18.975887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:18.975902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 
00:31:40.939 [2024-06-08 21:27:18.985738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:18.985828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:18.985845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:18.985853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:18.985859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:18.985874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 00:31:40.939 [2024-06-08 21:27:18.995766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:18.995860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:18.995877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:18.995886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:18.995892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:18.995907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 00:31:40.939 [2024-06-08 21:27:19.005790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:19.005888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:19.005905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:19.005912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:19.005919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:19.005934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 
00:31:40.939 [2024-06-08 21:27:19.015864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:19.015952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:19.015970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:19.015977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:19.015984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:19.016003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 00:31:40.939 [2024-06-08 21:27:19.025822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.939 [2024-06-08 21:27:19.025917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.939 [2024-06-08 21:27:19.025943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.939 [2024-06-08 21:27:19.025952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.939 [2024-06-08 21:27:19.025959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f050c000b90 00:31:40.939 [2024-06-08 21:27:19.025979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.939 qpair failed and we were unable to recover it. 
00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Read completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 Write completed with error (sct=0, sc=8) 00:31:40.939 starting I/O failed 00:31:40.939 [2024-06-08 21:27:19.026892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.201 [2024-06-08 21:27:19.035967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.201 [2024-06-08 21:27:19.036202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.201 [2024-06-08 21:27:19.036272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.201 [2024-06-08 21:27:19.036297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:31:41.201 [2024-06-08 21:27:19.036316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f051c000b90 00:31:41.201 [2024-06-08 21:27:19.036370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.202 qpair failed and we were unable to recover it. 00:31:41.202 [2024-06-08 21:27:19.046038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.202 [2024-06-08 21:27:19.046263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.202 [2024-06-08 21:27:19.046312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.202 [2024-06-08 21:27:19.046334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.202 [2024-06-08 21:27:19.046353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f051c000b90 00:31:41.202 [2024-06-08 21:27:19.046398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:41.202 qpair failed and we were unable to recover it. 00:31:41.202 [2024-06-08 21:27:19.046605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9feb50 is same with the state(5) to be set 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O 
failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 [2024-06-08 21:27:19.046958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.202 [2024-06-08 21:27:19.055901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.202 [2024-06-08 21:27:19.056025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.202 [2024-06-08 21:27:19.056042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.202 [2024-06-08 21:27:19.056048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.202 [2024-06-08 21:27:19.056052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0514000b90 00:31:41.202 [2024-06-08 21:27:19.056066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.202 qpair failed and we were unable to recover it. 00:31:41.202 [2024-06-08 21:27:19.065949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.202 [2024-06-08 21:27:19.066024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.202 [2024-06-08 21:27:19.066043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.202 [2024-06-08 21:27:19.066050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.202 [2024-06-08 21:27:19.066055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0514000b90 00:31:41.202 [2024-06-08 21:27:19.066069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:41.202 qpair failed and we were unable to recover it. 
00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Write completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 Read completed with error (sct=0, sc=8) 00:31:41.202 starting I/O failed 00:31:41.202 [2024-06-08 21:27:19.066472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:41.202 [2024-06-08 21:27:19.075876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.202 [2024-06-08 21:27:19.075972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.202 [2024-06-08 21:27:19.075993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.202 [2024-06-08 21:27:19.076003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:31:41.202 [2024-06-08 21:27:19.076009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9ffd90 00:31:41.203 [2024-06-08 21:27:19.076026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:41.203 qpair failed and we were unable to recover it. 00:31:41.203 [2024-06-08 21:27:19.085964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.203 [2024-06-08 21:27:19.086054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.203 [2024-06-08 21:27:19.086071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.203 [2024-06-08 21:27:19.086078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.203 [2024-06-08 21:27:19.086085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9ffd90 00:31:41.203 [2024-06-08 21:27:19.086101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:41.203 qpair failed and we were unable to recover it. 00:31:41.203 [2024-06-08 21:27:19.086457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9feb50 (9): Bad file descriptor 00:31:41.203 Initializing NVMe Controllers 00:31:41.203 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:41.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:41.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:41.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:41.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:41.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:41.203 Initialization complete. Launching workers. 
00:31:41.203 Starting thread on core 1 00:31:41.203 Starting thread on core 2 00:31:41.203 Starting thread on core 3 00:31:41.203 Starting thread on core 0 00:31:41.203 21:27:19 -- host/target_disconnect.sh@59 -- # sync 00:31:41.203 00:31:41.203 real 0m11.272s 00:31:41.203 user 0m20.490s 00:31:41.203 sys 0m4.447s 00:31:41.203 21:27:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.203 21:27:19 -- common/autotest_common.sh@10 -- # set +x 00:31:41.203 ************************************ 00:31:41.203 END TEST nvmf_target_disconnect_tc2 00:31:41.203 ************************************ 00:31:41.203 21:27:19 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:41.203 21:27:19 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:41.203 21:27:19 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:41.203 21:27:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:41.203 21:27:19 -- nvmf/common.sh@116 -- # sync 00:31:41.203 21:27:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:41.203 21:27:19 -- nvmf/common.sh@119 -- # set +e 00:31:41.203 21:27:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:41.203 21:27:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:41.203 rmmod nvme_tcp 00:31:41.203 rmmod nvme_fabrics 00:31:41.203 rmmod nvme_keyring 00:31:41.203 21:27:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:41.203 21:27:19 -- nvmf/common.sh@123 -- # set -e 00:31:41.203 21:27:19 -- nvmf/common.sh@124 -- # return 0 00:31:41.203 21:27:19 -- nvmf/common.sh@477 -- # '[' -n 2584707 ']' 00:31:41.203 21:27:19 -- nvmf/common.sh@478 -- # killprocess 2584707 00:31:41.203 21:27:19 -- common/autotest_common.sh@926 -- # '[' -z 2584707 ']' 00:31:41.203 21:27:19 -- common/autotest_common.sh@930 -- # kill -0 2584707 00:31:41.203 21:27:19 -- common/autotest_common.sh@931 -- # uname 00:31:41.203 21:27:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.203 21:27:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2584707 00:31:41.203 21:27:19 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:31:41.203 21:27:19 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:31:41.203 21:27:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2584707' 00:31:41.203 killing process with pid 2584707 00:31:41.203 21:27:19 -- common/autotest_common.sh@945 -- # kill 2584707 00:31:41.203 21:27:19 -- common/autotest_common.sh@950 -- # wait 2584707 00:31:41.464 21:27:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:41.464 21:27:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:41.464 21:27:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:41.464 21:27:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:41.464 21:27:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:41.464 21:27:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.464 21:27:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:41.464 21:27:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.378 21:27:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:43.378 00:31:43.378 real 0m21.028s 00:31:43.378 user 0m47.758s 00:31:43.378 sys 0m10.030s 00:31:43.378 21:27:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.378 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:31:43.378 ************************************ 00:31:43.378 END TEST nvmf_target_disconnect 00:31:43.378 
************************************ 00:31:43.640 21:27:21 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:43.640 21:27:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:43.640 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:31:43.640 21:27:21 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:43.640 00:31:43.640 real 24m22.907s 00:31:43.640 user 64m37.214s 00:31:43.640 sys 6m42.617s 00:31:43.640 21:27:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.640 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:31:43.640 ************************************ 00:31:43.640 END TEST nvmf_tcp 00:31:43.640 ************************************ 00:31:43.640 21:27:21 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:31:43.640 21:27:21 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:43.640 21:27:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:43.640 21:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:43.640 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:31:43.640 ************************************ 00:31:43.640 START TEST spdkcli_nvmf_tcp 00:31:43.640 ************************************ 00:31:43.640 21:27:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:43.640 * Looking for test storage... 00:31:43.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:43.640 21:27:21 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:43.640 21:27:21 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.640 21:27:21 -- nvmf/common.sh@7 -- # uname -s 00:31:43.640 21:27:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.640 21:27:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.640 21:27:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.640 21:27:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.640 21:27:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.640 21:27:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.640 21:27:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.640 21:27:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.640 21:27:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.640 21:27:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.640 21:27:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:43.640 21:27:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:43.640 21:27:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.640 21:27:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.640 21:27:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.640 21:27:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.640 21:27:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:31:43.640 21:27:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.640 21:27:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.640 21:27:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.640 21:27:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.640 21:27:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.640 21:27:21 -- paths/export.sh@5 -- # export PATH 00:31:43.640 21:27:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.640 21:27:21 -- nvmf/common.sh@46 -- # : 0 00:31:43.640 21:27:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:43.640 21:27:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:43.640 21:27:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:43.640 21:27:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.640 21:27:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.640 21:27:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:43.640 21:27:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:43.640 21:27:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:43.640 21:27:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:43.640 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:31:43.640 21:27:21 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:43.640 21:27:21 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2586548 00:31:43.640 21:27:21 -- spdkcli/common.sh@34 -- # waitforlisten 2586548 00:31:43.640 21:27:21 -- common/autotest_common.sh@819 -- # '[' -z 2586548 ']' 00:31:43.640 21:27:21 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:43.640 21:27:21 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.640 21:27:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:43.640 21:27:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.640 21:27:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:43.640 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:31:43.902 [2024-06-08 21:27:21.762439] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:43.902 [2024-06-08 21:27:21.762498] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586548 ] 00:31:43.902 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.902 [2024-06-08 21:27:21.821130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:43.902 [2024-06-08 21:27:21.885214] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:43.902 [2024-06-08 21:27:21.885459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.902 [2024-06-08 21:27:21.885472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.474 21:27:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:44.474 21:27:22 -- common/autotest_common.sh@852 -- # return 0 00:31:44.474 21:27:22 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:44.474 21:27:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:44.474 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:31:44.474 21:27:22 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:44.475 21:27:22 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:44.475 21:27:22 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:44.475 21:27:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:44.475 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:31:44.475 21:27:22 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:44.475 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:44.475 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:44.475 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:44.475 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:44.475 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:44.475 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:44.475 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:44.475 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:44.475 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:44.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:44.475 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:44.475 ' 00:31:45.048 [2024-06-08 21:27:22.885881] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:46.958 [2024-06-08 21:27:24.884269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.340 [2024-06-08 21:27:26.048091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:50.252 [2024-06-08 21:27:28.186414] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:52.173 [2024-06-08 21:27:30.019955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:53.557 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:53.557 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:53.557 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:53.557 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:53.557 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:53.557 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:53.557 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:53.557 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:53.557 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:53.557 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:53.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:53.557 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:53.557 21:27:31 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:53.557 21:27:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:53.557 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:53.557 21:27:31 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:53.557 21:27:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:53.557 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:31:53.557 21:27:31 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:53.557 21:27:31 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:54.128 21:27:31 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:54.128 21:27:32 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:54.128 21:27:32 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:54.128 21:27:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:54.128 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:31:54.128 21:27:32 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:54.128 21:27:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:54.128 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:31:54.128 21:27:32 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:54.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:54.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:54.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:54.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:54.128 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:54.128 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:54.128 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:54.128 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:54.128 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:54.128 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:54.128 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:54.128 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:54.128 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:54.128 ' 00:31:59.416 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:59.416 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:59.416 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:59.416 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:59.416 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:59.416 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:59.416 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:59.416 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:59.416 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:59.416 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:59.416 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:59.416 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:59.416 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:59.416 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:59.416 21:27:36 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:59.416 21:27:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:59.416 21:27:36 -- common/autotest_common.sh@10 -- # set +x 00:31:59.416 21:27:36 -- spdkcli/nvmf.sh@90 -- # killprocess 2586548 00:31:59.416 21:27:36 -- common/autotest_common.sh@926 -- # '[' -z 2586548 ']' 00:31:59.416 21:27:36 -- common/autotest_common.sh@930 -- # kill -0 2586548 00:31:59.416 21:27:36 -- common/autotest_common.sh@931 -- # uname 00:31:59.416 21:27:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:59.416 21:27:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2586548 00:31:59.416 21:27:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:59.416 21:27:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:59.416 21:27:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2586548' 00:31:59.416 killing process with pid 2586548 00:31:59.416 21:27:37 -- common/autotest_common.sh@945 -- # kill 2586548 00:31:59.416 [2024-06-08 21:27:37.015925] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:59.416 21:27:37 -- common/autotest_common.sh@950 -- # wait 2586548 00:31:59.416 21:27:37 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:59.416 21:27:37 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:59.416 21:27:37 -- spdkcli/common.sh@13 -- # '[' -n 2586548 ']' 00:31:59.416 21:27:37 -- spdkcli/common.sh@14 -- # killprocess 2586548 00:31:59.416 21:27:37 -- common/autotest_common.sh@926 -- # '[' -z 2586548 ']' 00:31:59.416 21:27:37 -- common/autotest_common.sh@930 -- # kill -0 2586548 00:31:59.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2586548) - No such process 00:31:59.416 21:27:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2586548 is not found' 00:31:59.416 Process with pid 2586548 is not found 00:31:59.416 21:27:37 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:59.416 21:27:37 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:59.416 21:27:37 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:59.416 00:31:59.416 real 0m15.557s 00:31:59.416 user 0m32.064s 00:31:59.416 sys 0m0.687s 00:31:59.417 21:27:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.417 21:27:37 -- common/autotest_common.sh@10 -- # set +x 00:31:59.417 ************************************ 00:31:59.417 END TEST spdkcli_nvmf_tcp 00:31:59.417 ************************************ 00:31:59.417 21:27:37 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:59.417 21:27:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:59.417 21:27:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:59.417 21:27:37 -- common/autotest_common.sh@10 -- # set +x 00:31:59.417 ************************************ 00:31:59.417 START TEST 
nvmf_identify_passthru 00:31:59.417 ************************************ 00:31:59.417 21:27:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:59.417 * Looking for test storage... 00:31:59.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.417 21:27:37 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.417 21:27:37 -- nvmf/common.sh@7 -- # uname -s 00:31:59.417 21:27:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.417 21:27:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.417 21:27:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.417 21:27:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.417 21:27:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.417 21:27:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.417 21:27:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.417 21:27:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.417 21:27:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.417 21:27:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.417 21:27:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:59.417 21:27:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:59.417 21:27:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.417 21:27:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.417 21:27:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.417 21:27:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.417 21:27:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.417 21:27:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.417 21:27:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.417 21:27:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- paths/export.sh@5 -- # export PATH 00:31:59.417 
21:27:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- nvmf/common.sh@46 -- # : 0 00:31:59.417 21:27:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:59.417 21:27:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:59.417 21:27:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:59.417 21:27:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.417 21:27:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.417 21:27:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:59.417 21:27:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:59.417 21:27:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:59.417 21:27:37 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.417 21:27:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.417 21:27:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.417 21:27:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.417 21:27:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- paths/export.sh@5 -- # export PATH 00:31:59.417 21:27:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.417 21:27:37 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:59.417 21:27:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:59.417 21:27:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.417 21:27:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:59.417 21:27:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:59.417 21:27:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:59.417 21:27:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.417 21:27:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:59.417 21:27:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.417 21:27:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:59.417 21:27:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:59.417 21:27:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:59.417 21:27:37 -- common/autotest_common.sh@10 -- # set +x 00:32:06.051 21:27:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:06.051 21:27:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:06.051 21:27:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:06.051 21:27:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:06.051 21:27:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:06.051 21:27:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:06.051 21:27:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:06.051 21:27:43 -- nvmf/common.sh@294 -- # net_devs=() 00:32:06.051 21:27:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:06.051 21:27:43 -- nvmf/common.sh@295 -- # e810=() 00:32:06.051 21:27:43 -- nvmf/common.sh@295 -- # local -ga e810 00:32:06.051 21:27:43 -- nvmf/common.sh@296 -- # x722=() 00:32:06.051 21:27:43 -- nvmf/common.sh@296 -- # local -ga x722 00:32:06.051 21:27:43 -- nvmf/common.sh@297 -- # mlx=() 00:32:06.051 21:27:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:06.051 21:27:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.051 21:27:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:06.051 21:27:43 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:06.051 21:27:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:06.051 21:27:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:06.051 21:27:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:06.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:06.051 21:27:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:06.051 21:27:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:06.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:06.051 21:27:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:06.051 21:27:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:06.052 21:27:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:06.052 21:27:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.052 21:27:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:06.052 21:27:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.052 21:27:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:06.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:06.052 21:27:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.052 21:27:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:06.052 21:27:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.052 21:27:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:06.052 21:27:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.052 21:27:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:06.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:06.052 21:27:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.052 21:27:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:06.052 21:27:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:06.052 21:27:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:06.052 21:27:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:06.052 21:27:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.052 21:27:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.052 21:27:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.052 21:27:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:06.052 21:27:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.052 21:27:43 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.052 21:27:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:06.052 21:27:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.052 21:27:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.052 21:27:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:06.052 21:27:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:06.052 21:27:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.052 21:27:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.052 21:27:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.052 21:27:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.052 21:27:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:06.052 21:27:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.313 21:27:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.313 21:27:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.313 21:27:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:06.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:32:06.313 00:32:06.313 --- 10.0.0.2 ping statistics --- 00:32:06.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.313 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:32:06.313 21:27:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:32:06.313 00:32:06.313 --- 10.0.0.1 ping statistics --- 00:32:06.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.313 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:32:06.313 21:27:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.313 21:27:44 -- nvmf/common.sh@410 -- # return 0 00:32:06.313 21:27:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:06.313 21:27:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.313 21:27:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:06.313 21:27:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:06.313 21:27:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.313 21:27:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:06.313 21:27:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:06.313 21:27:44 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:06.313 21:27:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:06.313 21:27:44 -- common/autotest_common.sh@10 -- # set +x 00:32:06.313 21:27:44 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:06.313 21:27:44 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:06.313 21:27:44 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:06.313 21:27:44 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:06.313 21:27:44 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:06.313 21:27:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:06.313 21:27:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:06.313 21:27:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:32:06.313 21:27:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:06.313 21:27:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:06.313 21:27:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:06.313 21:27:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:32:06.313 21:27:44 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:32:06.313 21:27:44 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:32:06.313 21:27:44 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:32:06.313 21:27:44 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:06.313 21:27:44 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:06.313 21:27:44 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:06.574 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.833 21:27:44 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:32:06.833 21:27:44 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:06.833 21:27:44 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:06.833 21:27:44 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:06.833 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.403 21:27:45 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:32:07.403 21:27:45 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:07.403 21:27:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:07.403 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:07.403 21:27:45 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:07.403 21:27:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:07.403 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:07.403 21:27:45 -- target/identify_passthru.sh@31 -- # nvmfpid=2593345 00:32:07.403 21:27:45 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:07.403 21:27:45 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:07.403 21:27:45 -- target/identify_passthru.sh@35 -- # waitforlisten 2593345 00:32:07.403 21:27:45 -- common/autotest_common.sh@819 -- # '[' -z 2593345 ']' 00:32:07.403 21:27:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.403 21:27:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:07.403 21:27:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.403 21:27:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:07.403 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:07.403 [2024-06-08 21:27:45.409241] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:07.403 [2024-06-08 21:27:45.409291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.403 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.403 [2024-06-08 21:27:45.473530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.665 [2024-06-08 21:27:45.540116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:07.665 [2024-06-08 21:27:45.540246] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.665 [2024-06-08 21:27:45.540256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.665 [2024-06-08 21:27:45.540264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.665 [2024-06-08 21:27:45.540426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.665 [2024-06-08 21:27:45.540513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.665 [2024-06-08 21:27:45.540769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.665 [2024-06-08 21:27:45.540770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.237 21:27:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:08.237 21:27:46 -- common/autotest_common.sh@852 -- # return 0 00:32:08.237 21:27:46 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:08.237 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.237 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.237 INFO: Log level set to 20 00:32:08.237 INFO: Requests: 00:32:08.237 { 00:32:08.237 "jsonrpc": "2.0", 00:32:08.237 "method": "nvmf_set_config", 00:32:08.237 "id": 1, 00:32:08.237 "params": { 00:32:08.237 "admin_cmd_passthru": { 00:32:08.237 "identify_ctrlr": true 00:32:08.237 } 00:32:08.237 } 00:32:08.237 } 00:32:08.237 00:32:08.237 INFO: response: 00:32:08.237 { 00:32:08.237 "jsonrpc": "2.0", 00:32:08.237 "id": 1, 00:32:08.237 "result": true 00:32:08.237 } 00:32:08.237 00:32:08.237 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.237 21:27:46 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:08.237 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.237 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.237 INFO: Setting log level to 20 00:32:08.237 INFO: Setting log level to 20 00:32:08.237 INFO: Log level set to 20 00:32:08.237 INFO: Log level set to 20 00:32:08.237 INFO: Requests: 00:32:08.237 { 00:32:08.237 "jsonrpc": "2.0", 00:32:08.237 "method": "framework_start_init", 00:32:08.237 "id": 1 00:32:08.237 } 00:32:08.237 00:32:08.237 INFO: Requests: 00:32:08.237 { 00:32:08.237 "jsonrpc": "2.0", 00:32:08.237 "method": "framework_start_init", 00:32:08.237 "id": 1 00:32:08.237 } 00:32:08.237 00:32:08.237 [2024-06-08 21:27:46.256823] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:08.237 INFO: response: 00:32:08.237 { 00:32:08.237 "jsonrpc": "2.0", 00:32:08.237 "id": 1, 00:32:08.237 "result": true 00:32:08.237 } 00:32:08.237 00:32:08.237 INFO: response: 00:32:08.237 { 00:32:08.237 "jsonrpc": "2.0", 00:32:08.237 "id": 1, 00:32:08.237 "result": true 00:32:08.237 } 00:32:08.237 00:32:08.237 21:27:46 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.237 21:27:46 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.237 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.237 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.237 INFO: Setting log level to 40 00:32:08.237 INFO: Setting log level to 40 00:32:08.237 INFO: Setting log level to 40 00:32:08.237 [2024-06-08 21:27:46.270078] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.237 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.237 21:27:46 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:08.237 21:27:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:08.237 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.237 21:27:46 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:32:08.237 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.237 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.810 Nvme0n1 00:32:08.810 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.810 21:27:46 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:08.810 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.810 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.810 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.810 21:27:46 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:08.810 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.810 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.810 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.810 21:27:46 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.810 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.810 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.810 [2024-06-08 21:27:46.653747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.810 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.810 21:27:46 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:08.810 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.810 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:08.810 [2024-06-08 21:27:46.665516] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:08.810 [ 00:32:08.810 { 00:32:08.810 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:08.810 "subtype": "Discovery", 00:32:08.810 "listen_addresses": [], 00:32:08.810 "allow_any_host": true, 00:32:08.810 "hosts": [] 00:32:08.810 }, 00:32:08.810 { 00:32:08.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.810 "subtype": "NVMe", 00:32:08.810 "listen_addresses": [ 00:32:08.810 { 00:32:08.810 "transport": "TCP", 00:32:08.810 "trtype": "TCP", 00:32:08.810 "adrfam": "IPv4", 00:32:08.810 "traddr": "10.0.0.2", 00:32:08.810 "trsvcid": "4420" 00:32:08.810 } 00:32:08.810 ], 00:32:08.810 "allow_any_host": true, 00:32:08.810 "hosts": [], 00:32:08.810 "serial_number": "SPDK00000000000001", 
00:32:08.810 "model_number": "SPDK bdev Controller", 00:32:08.810 "max_namespaces": 1, 00:32:08.810 "min_cntlid": 1, 00:32:08.810 "max_cntlid": 65519, 00:32:08.810 "namespaces": [ 00:32:08.810 { 00:32:08.810 "nsid": 1, 00:32:08.810 "bdev_name": "Nvme0n1", 00:32:08.810 "name": "Nvme0n1", 00:32:08.810 "nguid": "3634473052605487002538450000003E", 00:32:08.810 "uuid": "36344730-5260-5487-0025-38450000003e" 00:32:08.810 } 00:32:08.810 ] 00:32:08.810 } 00:32:08.810 ] 00:32:08.810 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.810 21:27:46 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:08.810 21:27:46 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:08.810 21:27:46 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:08.810 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.810 21:27:46 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:32:08.810 21:27:46 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:08.810 21:27:46 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:08.810 21:27:46 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:08.810 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.072 21:27:46 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:32:09.072 21:27:46 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:32:09.072 21:27:46 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:32:09.072 21:27:46 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.072 21:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.072 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:09.072 21:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.072 21:27:46 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:09.072 21:27:46 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:09.072 21:27:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:09.072 21:27:46 -- nvmf/common.sh@116 -- # sync 00:32:09.072 21:27:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:09.072 21:27:46 -- nvmf/common.sh@119 -- # set +e 00:32:09.072 21:27:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:09.072 21:27:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:09.072 rmmod nvme_tcp 00:32:09.072 rmmod nvme_fabrics 00:32:09.072 rmmod nvme_keyring 00:32:09.072 21:27:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:09.072 21:27:46 -- nvmf/common.sh@123 -- # set -e 00:32:09.072 21:27:46 -- nvmf/common.sh@124 -- # return 0 00:32:09.072 21:27:46 -- nvmf/common.sh@477 -- # '[' -n 2593345 ']' 00:32:09.072 21:27:46 -- nvmf/common.sh@478 -- # killprocess 2593345 00:32:09.072 21:27:46 -- common/autotest_common.sh@926 -- # '[' -z 2593345 ']' 00:32:09.072 21:27:46 -- common/autotest_common.sh@930 -- # kill -0 2593345 00:32:09.072 21:27:46 -- common/autotest_common.sh@931 -- # uname 00:32:09.072 21:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:09.072 21:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2593345 00:32:09.072 21:27:47 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:09.072 21:27:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:09.072 21:27:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2593345' 00:32:09.072 killing process with pid 2593345 00:32:09.072 21:27:47 -- common/autotest_common.sh@945 -- # kill 2593345 00:32:09.072 [2024-06-08 21:27:47.040932] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:09.072 21:27:47 -- common/autotest_common.sh@950 -- # wait 2593345 00:32:09.333 21:27:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:09.333 21:27:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:09.333 21:27:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:09.333 21:27:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:09.333 21:27:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:09.333 21:27:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.333 21:27:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:09.333 21:27:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.882 21:27:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:11.882 00:32:11.882 real 0m12.177s 00:32:11.882 user 0m9.272s 00:32:11.882 sys 0m5.831s 00:32:11.882 21:27:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.882 21:27:49 -- common/autotest_common.sh@10 -- # set +x 00:32:11.882 ************************************ 00:32:11.882 END TEST nvmf_identify_passthru 00:32:11.882 ************************************ 00:32:11.882 21:27:49 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:11.882 21:27:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:11.882 21:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.882 21:27:49 -- common/autotest_common.sh@10 -- # set +x 00:32:11.882 ************************************ 00:32:11.882 START TEST nvmf_dif 00:32:11.882 ************************************ 00:32:11.882 21:27:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:11.882 * Looking for test storage... 
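For reference, the nvmf_identify_passthru flow traced above reduces to roughly the following RPC sequence. This is a sketch reconstructed from the trace (the direct rpc.py invocation form, relative paths, and default socket are assumptions; the job itself uses the rpc_cmd wrapper and runs the target inside the cvl_0_0_ns_spdk namespace), not a verbatim excerpt of the job:

    # target started with RPCs deferred until framework_start_init:
    #   nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr     # enable custom identify handler
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the test then runs spdk_nvme_identify once over PCIe (traddr:0000:65:00.0) and once over
    # NVMe/TCP (traddr:10.0.0.2 trsvcid:4420 subnqn:...cnode1) and requires the reported
    # Serial Number and Model Number to match on both paths.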
00:32:11.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.882 21:27:49 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.882 21:27:49 -- nvmf/common.sh@7 -- # uname -s 00:32:11.882 21:27:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.882 21:27:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.882 21:27:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.882 21:27:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.882 21:27:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.882 21:27:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.882 21:27:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.882 21:27:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.882 21:27:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.882 21:27:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.882 21:27:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.882 21:27:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.882 21:27:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.882 21:27:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.882 21:27:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.882 21:27:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.882 21:27:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.882 21:27:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.883 21:27:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.883 21:27:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.883 21:27:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.883 21:27:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.883 21:27:49 -- paths/export.sh@5 -- # export PATH 00:32:11.883 21:27:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.883 21:27:49 -- nvmf/common.sh@46 -- # : 0 00:32:11.883 21:27:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:11.883 21:27:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:11.883 21:27:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:11.883 21:27:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.883 21:27:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.883 21:27:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:11.883 21:27:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:11.883 21:27:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:11.883 21:27:49 -- target/dif.sh@15 -- # NULL_META=16 00:32:11.883 21:27:49 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:11.883 21:27:49 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:11.883 21:27:49 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:11.883 21:27:49 -- target/dif.sh@135 -- # nvmftestinit 00:32:11.883 21:27:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:11.883 21:27:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.883 21:27:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:11.883 21:27:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:11.883 21:27:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:11.883 21:27:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.883 21:27:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:11.883 21:27:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.883 21:27:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:11.883 21:27:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:11.883 21:27:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:11.883 21:27:49 -- common/autotest_common.sh@10 -- # set +x 00:32:18.472 21:27:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:18.472 21:27:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:18.472 21:27:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:18.472 21:27:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:18.472 21:27:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:18.472 21:27:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:18.472 21:27:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:18.472 21:27:56 -- nvmf/common.sh@294 -- # net_devs=() 00:32:18.472 21:27:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:18.472 21:27:56 -- nvmf/common.sh@295 -- # e810=() 00:32:18.472 21:27:56 -- nvmf/common.sh@295 -- # local -ga e810 00:32:18.472 21:27:56 -- nvmf/common.sh@296 -- # x722=() 00:32:18.472 21:27:56 -- nvmf/common.sh@296 -- # local -ga x722 00:32:18.472 21:27:56 -- nvmf/common.sh@297 -- # mlx=() 00:32:18.472 21:27:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:18.472 21:27:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:32:18.472 21:27:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.472 21:27:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:18.472 21:27:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:18.472 21:27:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:18.472 21:27:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:18.472 21:27:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:18.472 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:18.472 21:27:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:18.472 21:27:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:18.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:18.472 21:27:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:18.472 21:27:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:18.472 21:27:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:18.472 21:27:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.472 21:27:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:18.472 21:27:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.472 21:27:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:18.472 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:18.472 21:27:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.472 21:27:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:18.472 21:27:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.472 21:27:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:18.473 21:27:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.473 21:27:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:18.473 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:18.473 21:27:56 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:18.473 21:27:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:18.473 21:27:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:18.473 21:27:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:18.473 21:27:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:18.473 21:27:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:18.473 21:27:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.473 21:27:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.473 21:27:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.473 21:27:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:18.473 21:27:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.473 21:27:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.473 21:27:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:18.473 21:27:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.473 21:27:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.473 21:27:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:18.473 21:27:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:18.473 21:27:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.473 21:27:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.473 21:27:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.473 21:27:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.473 21:27:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:18.473 21:27:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.473 21:27:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.473 21:27:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.473 21:27:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:18.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:32:18.473 00:32:18.473 --- 10.0.0.2 ping statistics --- 00:32:18.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.473 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:32:18.473 21:27:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:18.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:32:18.473 00:32:18.473 --- 10.0.0.1 ping statistics --- 00:32:18.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.473 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:32:18.473 21:27:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.473 21:27:56 -- nvmf/common.sh@410 -- # return 0 00:32:18.473 21:27:56 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:18.473 21:27:56 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:21.778 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:21.778 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:21.778 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:22.351 21:28:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.351 21:28:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:22.351 21:28:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:22.351 21:28:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.351 21:28:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:22.351 21:28:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:22.351 21:28:00 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:22.351 21:28:00 -- target/dif.sh@137 -- # nvmfappstart 00:32:22.351 21:28:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:22.351 21:28:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:22.351 21:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:22.351 21:28:00 -- nvmf/common.sh@469 -- # nvmfpid=2599312 00:32:22.351 21:28:00 -- nvmf/common.sh@470 -- # waitforlisten 2599312 00:32:22.351 21:28:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:22.351 21:28:00 -- common/autotest_common.sh@819 -- # '[' -z 2599312 ']' 00:32:22.351 21:28:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.351 21:28:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:22.351 21:28:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
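The nvmftestinit/nvmf_tcp_init sequence just traced sets up the same split as in the passthru test: the target-side E810 port (cvl_0_0) is moved into a network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1. Condensed from the trace above (interface names are as discovered on this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator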
00:32:22.351 21:28:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:22.351 21:28:00 -- common/autotest_common.sh@10 -- # set +x 00:32:22.351 [2024-06-08 21:28:00.249537] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:22.351 [2024-06-08 21:28:00.249600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.351 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.351 [2024-06-08 21:28:00.319181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.351 [2024-06-08 21:28:00.392072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:22.351 [2024-06-08 21:28:00.392193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.351 [2024-06-08 21:28:00.392202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.351 [2024-06-08 21:28:00.392214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.351 [2024-06-08 21:28:00.392233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.923 21:28:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:22.923 21:28:01 -- common/autotest_common.sh@852 -- # return 0 00:32:22.923 21:28:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:22.923 21:28:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:22.923 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 21:28:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.184 21:28:01 -- target/dif.sh@139 -- # create_transport 00:32:23.184 21:28:01 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:23.184 21:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.184 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 [2024-06-08 21:28:01.051241] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.184 21:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.184 21:28:01 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:23.184 21:28:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:23.184 21:28:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:23.184 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 ************************************ 00:32:23.184 START TEST fio_dif_1_default 00:32:23.184 ************************************ 00:32:23.184 21:28:01 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:32:23.184 21:28:01 -- target/dif.sh@86 -- # create_subsystems 0 00:32:23.184 21:28:01 -- target/dif.sh@28 -- # local sub 00:32:23.184 21:28:01 -- target/dif.sh@30 -- # for sub in "$@" 00:32:23.184 21:28:01 -- target/dif.sh@31 -- # create_subsystem 0 00:32:23.184 21:28:01 -- target/dif.sh@18 -- # local sub_id=0 00:32:23.184 21:28:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:23.184 21:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.184 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 bdev_null0 00:32:23.184 21:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.184 21:28:01 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:23.184 21:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.184 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 21:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.184 21:28:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:23.184 21:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.184 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 21:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.184 21:28:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:23.184 21:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.184 21:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:23.184 [2024-06-08 21:28:01.107530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.184 21:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.184 21:28:01 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:23.184 21:28:01 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:23.185 21:28:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:23.185 21:28:01 -- nvmf/common.sh@520 -- # config=() 00:32:23.185 21:28:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.185 21:28:01 -- nvmf/common.sh@520 -- # local subsystem config 00:32:23.185 21:28:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.185 21:28:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:23.185 21:28:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:23.185 { 00:32:23.185 "params": { 00:32:23.185 "name": "Nvme$subsystem", 00:32:23.185 "trtype": "$TEST_TRANSPORT", 00:32:23.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.185 "adrfam": "ipv4", 00:32:23.185 "trsvcid": "$NVMF_PORT", 00:32:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.185 "hdgst": ${hdgst:-false}, 00:32:23.185 "ddgst": ${ddgst:-false} 00:32:23.185 }, 00:32:23.185 "method": "bdev_nvme_attach_controller" 00:32:23.185 } 00:32:23.185 EOF 00:32:23.185 )") 00:32:23.185 21:28:01 -- target/dif.sh@82 -- # gen_fio_conf 00:32:23.185 21:28:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:23.185 21:28:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:23.185 21:28:01 -- target/dif.sh@54 -- # local file 00:32:23.185 21:28:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:23.185 21:28:01 -- target/dif.sh@56 -- # cat 00:32:23.185 21:28:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.185 21:28:01 -- common/autotest_common.sh@1320 -- # shift 00:32:23.185 21:28:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:23.185 21:28:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.185 21:28:01 -- nvmf/common.sh@542 -- # cat 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.185 21:28:01 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:23.185 21:28:01 -- target/dif.sh@72 -- # (( file <= files )) 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:23.185 21:28:01 -- nvmf/common.sh@544 -- # jq . 00:32:23.185 21:28:01 -- nvmf/common.sh@545 -- # IFS=, 00:32:23.185 21:28:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:23.185 "params": { 00:32:23.185 "name": "Nvme0", 00:32:23.185 "trtype": "tcp", 00:32:23.185 "traddr": "10.0.0.2", 00:32:23.185 "adrfam": "ipv4", 00:32:23.185 "trsvcid": "4420", 00:32:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.185 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.185 "hdgst": false, 00:32:23.185 "ddgst": false 00:32:23.185 }, 00:32:23.185 "method": "bdev_nvme_attach_controller" 00:32:23.185 }' 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:23.185 21:28:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:23.185 21:28:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:23.185 21:28:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:23.185 21:28:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:23.185 21:28:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:23.185 21:28:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.445 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:23.445 fio-3.35 00:32:23.445 Starting 1 thread 00:32:23.705 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.967 [2024-06-08 21:28:02.016579] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
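The fio job itself is generated on the fly (gen_fio_conf) and handed to fio over /dev/fd/61, with the bdev JSON printed above passed on /dev/fd/62. A hypothetical standalone equivalent, consistent with the fio banner below (job name, rw, block size, and iodepth come from the banner; filename, runtime, and the file names used here are inferred placeholders):

    # job.fio (hypothetical):
    #   [filename0]
    #   thread=1
    #   ioengine=spdk_bdev
    #   filename=Nvme0n1
    #   rw=randread
    #   bs=4096
    #   iodepth=4
    #   runtime=10
    #   time_based=1
    #
    # invocation, matching the LD_PRELOAD / --spdk_json_conf form in the trace:
    LD_PRELOAD=build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio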
00:32:23.967 [2024-06-08 21:28:02.016626] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:36.199 00:32:36.199 filename0: (groupid=0, jobs=1): err= 0: pid=2599847: Sat Jun 8 21:28:12 2024 00:32:36.199 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:32:36.199 slat (nsec): min=5333, max=30841, avg=6137.34, stdev=1520.60 00:32:36.199 clat (usec): min=41096, max=43779, avg=41994.92, stdev=164.43 00:32:36.199 lat (usec): min=41101, max=43810, avg=42001.05, stdev=164.92 00:32:36.199 clat percentiles (usec): 00:32:36.199 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:36.199 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:36.199 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:36.199 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:32:36.199 | 99.99th=[43779] 00:32:36.199 bw ( KiB/s): min= 352, max= 384, per=99.78%, avg=380.80, stdev= 9.85, samples=20 00:32:36.199 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:36.199 lat (msec) : 50=100.00% 00:32:36.199 cpu : usr=95.93%, sys=3.87%, ctx=13, majf=0, minf=223 00:32:36.199 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.199 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.199 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:36.200 00:32:36.200 Run status group 0 (all jobs): 00:32:36.200 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10041-10041msec 00:32:36.200 21:28:12 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:36.200 21:28:12 -- target/dif.sh@43 -- # local sub 00:32:36.200 21:28:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:36.200 21:28:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:36.200 21:28:12 -- target/dif.sh@36 -- # local sub_id=0 00:32:36.200 21:28:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 00:32:36.200 real 0m11.268s 00:32:36.200 user 0m25.473s 00:32:36.200 sys 0m0.705s 00:32:36.200 21:28:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 ************************************ 00:32:36.200 END TEST fio_dif_1_default 00:32:36.200 ************************************ 00:32:36.200 21:28:12 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:36.200 21:28:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:36.200 21:28:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 ************************************ 00:32:36.200 START TEST fio_dif_1_multi_subsystems 00:32:36.200 
************************************ 00:32:36.200 21:28:12 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:32:36.200 21:28:12 -- target/dif.sh@92 -- # local files=1 00:32:36.200 21:28:12 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:36.200 21:28:12 -- target/dif.sh@28 -- # local sub 00:32:36.200 21:28:12 -- target/dif.sh@30 -- # for sub in "$@" 00:32:36.200 21:28:12 -- target/dif.sh@31 -- # create_subsystem 0 00:32:36.200 21:28:12 -- target/dif.sh@18 -- # local sub_id=0 00:32:36.200 21:28:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 bdev_null0 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 [2024-06-08 21:28:12.419595] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@30 -- # for sub in "$@" 00:32:36.200 21:28:12 -- target/dif.sh@31 -- # create_subsystem 1 00:32:36.200 21:28:12 -- target/dif.sh@18 -- # local sub_id=1 00:32:36.200 21:28:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 bdev_null1 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.200 21:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.200 21:28:12 -- common/autotest_common.sh@10 -- # set +x 
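For the multi-subsystems case, the setup traced above (and completed just below for the second subsystem) amounts to two null bdevs with 16-byte metadata and DIF type 1, each exported through its own NVMe/TCP subsystem; the transport was created earlier in this test with --dif-insert-or-strip. Sketched as direct rpc.py calls, as an illustration of the traced rpc_cmd sequence rather than a verbatim excerpt:

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for i in 0 1; do
      scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          --serial-number 53313233-$i --allow-any-host
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
    done
    # fio then attaches Nvme0 (cnode0) and Nvme1 (cnode1) over TCP via the generated
    # JSON configuration shown further down in the trace.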
00:32:36.200 21:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.200 21:28:12 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:36.200 21:28:12 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:36.200 21:28:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:36.200 21:28:12 -- nvmf/common.sh@520 -- # config=() 00:32:36.200 21:28:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:36.200 21:28:12 -- nvmf/common.sh@520 -- # local subsystem config 00:32:36.200 21:28:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:36.200 21:28:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:36.200 21:28:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:36.200 { 00:32:36.200 "params": { 00:32:36.200 "name": "Nvme$subsystem", 00:32:36.200 "trtype": "$TEST_TRANSPORT", 00:32:36.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:36.200 "adrfam": "ipv4", 00:32:36.200 "trsvcid": "$NVMF_PORT", 00:32:36.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:36.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:36.200 "hdgst": ${hdgst:-false}, 00:32:36.200 "ddgst": ${ddgst:-false} 00:32:36.200 }, 00:32:36.200 "method": "bdev_nvme_attach_controller" 00:32:36.200 } 00:32:36.200 EOF 00:32:36.200 )") 00:32:36.200 21:28:12 -- target/dif.sh@82 -- # gen_fio_conf 00:32:36.200 21:28:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:36.200 21:28:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:36.200 21:28:12 -- target/dif.sh@54 -- # local file 00:32:36.200 21:28:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:36.200 21:28:12 -- target/dif.sh@56 -- # cat 00:32:36.200 21:28:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.200 21:28:12 -- common/autotest_common.sh@1320 -- # shift 00:32:36.200 21:28:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:36.200 21:28:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:36.200 21:28:12 -- nvmf/common.sh@542 -- # cat 00:32:36.200 21:28:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.200 21:28:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:36.200 21:28:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:36.200 21:28:12 -- target/dif.sh@72 -- # (( file <= files )) 00:32:36.200 21:28:12 -- target/dif.sh@73 -- # cat 00:32:36.200 21:28:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:36.200 21:28:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:36.200 21:28:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:36.200 { 00:32:36.200 "params": { 00:32:36.200 "name": "Nvme$subsystem", 00:32:36.200 "trtype": "$TEST_TRANSPORT", 00:32:36.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:36.200 "adrfam": "ipv4", 00:32:36.200 "trsvcid": "$NVMF_PORT", 00:32:36.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:36.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:36.200 "hdgst": ${hdgst:-false}, 00:32:36.200 "ddgst": ${ddgst:-false} 00:32:36.200 }, 00:32:36.200 "method": "bdev_nvme_attach_controller" 00:32:36.200 } 00:32:36.200 EOF 00:32:36.200 )") 00:32:36.200 21:28:12 -- target/dif.sh@72 -- # (( file++ )) 00:32:36.200 
21:28:12 -- target/dif.sh@72 -- # (( file <= files )) 00:32:36.200 21:28:12 -- nvmf/common.sh@542 -- # cat 00:32:36.200 21:28:12 -- nvmf/common.sh@544 -- # jq . 00:32:36.200 21:28:12 -- nvmf/common.sh@545 -- # IFS=, 00:32:36.200 21:28:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:36.200 "params": { 00:32:36.200 "name": "Nvme0", 00:32:36.200 "trtype": "tcp", 00:32:36.200 "traddr": "10.0.0.2", 00:32:36.200 "adrfam": "ipv4", 00:32:36.200 "trsvcid": "4420", 00:32:36.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:36.200 "hdgst": false, 00:32:36.200 "ddgst": false 00:32:36.201 }, 00:32:36.201 "method": "bdev_nvme_attach_controller" 00:32:36.201 },{ 00:32:36.201 "params": { 00:32:36.201 "name": "Nvme1", 00:32:36.201 "trtype": "tcp", 00:32:36.201 "traddr": "10.0.0.2", 00:32:36.201 "adrfam": "ipv4", 00:32:36.201 "trsvcid": "4420", 00:32:36.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:36.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:36.201 "hdgst": false, 00:32:36.201 "ddgst": false 00:32:36.201 }, 00:32:36.201 "method": "bdev_nvme_attach_controller" 00:32:36.201 }' 00:32:36.201 21:28:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:36.201 21:28:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:36.201 21:28:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:36.201 21:28:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.201 21:28:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:36.201 21:28:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:36.201 21:28:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:36.201 21:28:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:36.201 21:28:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:36.201 21:28:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:36.201 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:36.201 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:36.201 fio-3.35 00:32:36.201 Starting 2 threads 00:32:36.201 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.201 [2024-06-08 21:28:13.515069] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:36.201 [2024-06-08 21:28:13.515112] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:46.264 00:32:46.264 filename0: (groupid=0, jobs=1): err= 0: pid=2602307: Sat Jun 8 21:28:23 2024 00:32:46.264 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:32:46.264 slat (nsec): min=5327, max=34983, avg=6524.14, stdev=2039.02 00:32:46.264 clat (usec): min=41659, max=43216, avg=41990.80, stdev=123.38 00:32:46.264 lat (usec): min=41664, max=43248, avg=41997.32, stdev=123.90 00:32:46.264 clat percentiles (usec): 00:32:46.264 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:32:46.264 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:46.264 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:46.264 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:32:46.264 | 99.99th=[43254] 00:32:46.264 bw ( KiB/s): min= 352, max= 384, per=34.17%, avg=380.80, stdev= 9.85, samples=20 00:32:46.264 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:46.264 lat (msec) : 50=100.00% 00:32:46.264 cpu : usr=97.31%, sys=2.46%, ctx=13, majf=0, minf=142 00:32:46.264 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.264 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.264 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:46.264 filename1: (groupid=0, jobs=1): err= 0: pid=2602308: Sat Jun 8 21:28:23 2024 00:32:46.264 read: IOPS=183, BW=732KiB/s (750kB/s)(7344KiB/10027msec) 00:32:46.264 slat (nsec): min=5363, max=32622, avg=6409.87, stdev=1646.61 00:32:46.264 clat (usec): min=1159, max=43170, avg=21825.71, stdev=20277.68 00:32:46.264 lat (usec): min=1165, max=43203, avg=21832.12, stdev=20277.58 00:32:46.264 clat percentiles (usec): 00:32:46.264 | 1.00th=[ 1205], 5.00th=[ 1319], 10.00th=[ 1483], 20.00th=[ 1532], 00:32:46.264 | 30.00th=[ 1549], 40.00th=[ 1565], 50.00th=[41157], 60.00th=[41681], 00:32:46.264 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:46.264 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:32:46.264 | 99.99th=[43254] 00:32:46.264 bw ( KiB/s): min= 704, max= 768, per=65.81%, avg=732.85, stdev=32.62, samples=20 00:32:46.264 iops : min= 176, max= 192, avg=183.20, stdev= 8.17, samples=20 00:32:46.264 lat (msec) : 2=49.89%, 50=50.11% 00:32:46.264 cpu : usr=96.95%, sys=2.82%, ctx=21, majf=0, minf=155 00:32:46.264 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.264 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.264 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:46.264 00:32:46.264 Run status group 0 (all jobs): 00:32:46.264 READ: bw=1112KiB/s (1139kB/s), 381KiB/s-732KiB/s (390kB/s-750kB/s), io=10.9MiB (11.4MB), run=10027-10041msec 00:32:46.264 21:28:23 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:46.264 21:28:23 -- target/dif.sh@43 -- # local sub 00:32:46.264 21:28:23 -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.264 21:28:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:46.264 21:28:23 -- target/dif.sh@36 -- # 
local sub_id=0 00:32:46.264 21:28:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.264 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.264 21:28:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:46.264 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.264 21:28:23 -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.264 21:28:23 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:46.264 21:28:23 -- target/dif.sh@36 -- # local sub_id=1 00:32:46.264 21:28:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:46.264 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.264 21:28:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:46.264 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.264 00:32:46.264 real 0m11.485s 00:32:46.264 user 0m36.983s 00:32:46.264 sys 0m0.861s 00:32:46.264 21:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 ************************************ 00:32:46.264 END TEST fio_dif_1_multi_subsystems 00:32:46.264 ************************************ 00:32:46.264 21:28:23 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:46.264 21:28:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:46.264 21:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 ************************************ 00:32:46.264 START TEST fio_dif_rand_params 00:32:46.264 ************************************ 00:32:46.264 21:28:23 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:32:46.264 21:28:23 -- target/dif.sh@100 -- # local NULL_DIF 00:32:46.264 21:28:23 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:46.264 21:28:23 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:46.264 21:28:23 -- target/dif.sh@103 -- # bs=128k 00:32:46.264 21:28:23 -- target/dif.sh@103 -- # numjobs=3 00:32:46.264 21:28:23 -- target/dif.sh@103 -- # iodepth=3 00:32:46.264 21:28:23 -- target/dif.sh@103 -- # runtime=5 00:32:46.264 21:28:23 -- target/dif.sh@105 -- # create_subsystems 0 00:32:46.264 21:28:23 -- target/dif.sh@28 -- # local sub 00:32:46.264 21:28:23 -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.264 21:28:23 -- target/dif.sh@31 -- # create_subsystem 0 00:32:46.264 21:28:23 -- target/dif.sh@18 -- # local sub_id=0 00:32:46.264 21:28:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:46.264 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.264 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.264 bdev_null0 00:32:46.264 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.265 21:28:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:46.265 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.265 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.265 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.265 21:28:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:46.265 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.265 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.265 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.265 21:28:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.265 21:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:46.265 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:46.265 [2024-06-08 21:28:23.949394] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.265 21:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:46.265 21:28:23 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:46.265 21:28:23 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:46.265 21:28:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:46.265 21:28:23 -- nvmf/common.sh@520 -- # config=() 00:32:46.265 21:28:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.265 21:28:23 -- nvmf/common.sh@520 -- # local subsystem config 00:32:46.265 21:28:23 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.265 21:28:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:46.265 21:28:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:46.265 { 00:32:46.265 "params": { 00:32:46.265 "name": "Nvme$subsystem", 00:32:46.265 "trtype": "$TEST_TRANSPORT", 00:32:46.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.265 "adrfam": "ipv4", 00:32:46.265 "trsvcid": "$NVMF_PORT", 00:32:46.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.265 "hdgst": ${hdgst:-false}, 00:32:46.265 "ddgst": ${ddgst:-false} 00:32:46.265 }, 00:32:46.265 "method": "bdev_nvme_attach_controller" 00:32:46.265 } 00:32:46.265 EOF 00:32:46.265 )") 00:32:46.265 21:28:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:46.265 21:28:23 -- target/dif.sh@82 -- # gen_fio_conf 00:32:46.265 21:28:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.265 21:28:23 -- target/dif.sh@54 -- # local file 00:32:46.265 21:28:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:46.265 21:28:23 -- target/dif.sh@56 -- # cat 00:32:46.265 21:28:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.265 21:28:23 -- common/autotest_common.sh@1320 -- # shift 00:32:46.265 21:28:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:46.265 21:28:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.265 21:28:23 -- nvmf/common.sh@542 -- # cat 00:32:46.265 21:28:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.265 21:28:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:46.265 21:28:23 
-- common/autotest_common.sh@1324 -- # grep libasan 00:32:46.265 21:28:23 -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.265 21:28:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:46.265 21:28:23 -- nvmf/common.sh@544 -- # jq . 00:32:46.265 21:28:23 -- nvmf/common.sh@545 -- # IFS=, 00:32:46.265 21:28:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:46.265 "params": { 00:32:46.265 "name": "Nvme0", 00:32:46.265 "trtype": "tcp", 00:32:46.265 "traddr": "10.0.0.2", 00:32:46.265 "adrfam": "ipv4", 00:32:46.265 "trsvcid": "4420", 00:32:46.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.265 "hdgst": false, 00:32:46.265 "ddgst": false 00:32:46.265 }, 00:32:46.265 "method": "bdev_nvme_attach_controller" 00:32:46.265 }' 00:32:46.265 21:28:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:46.265 21:28:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:46.265 21:28:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.265 21:28:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.265 21:28:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:46.265 21:28:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:46.265 21:28:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:46.265 21:28:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:46.265 21:28:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:46.265 21:28:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.524 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:46.524 ... 00:32:46.524 fio-3.35 00:32:46.524 Starting 3 threads 00:32:46.524 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.784 [2024-06-08 21:28:24.851196] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
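The single-file job starting here (randread, 128 KiB blocks, iodepth 3, three jobs, 5 s) reads from the subsystem created by the rpc_cmd calls traced above. Outside the harness, that setup and the destroy_subsystems teardown traced after the previous run would look roughly like the sketch below; rpc_cmd is a wrapper around the target's RPC interface, so calling scripts/rpc.py directly against the default /var/tmp/spdk.sock socket is an assumption of this sketch.

# Hedged sketch of the traced create/destroy RPCs for the NULL_DIF=3 case.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# create_subsystem 0: 64 MB null bdev with 512-byte blocks, 16-byte metadata and
# DIF type 3, exported as nqn.2016-06.io.spdk:cnode0 over NVMe/TCP 10.0.0.2:4420
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# destroy_subsystem 0, matching the teardown traced after each fio run
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0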
00:32:46.784 [2024-06-08 21:28:24.851250] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:52.069 00:32:52.069 filename0: (groupid=0, jobs=1): err= 0: pid=2604527: Sat Jun 8 21:28:30 2024 00:32:52.069 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(124MiB/5006msec) 00:32:52.069 slat (nsec): min=7787, max=30905, avg=8710.25, stdev=1122.37 00:32:52.069 clat (usec): min=5725, max=56712, avg=15107.96, stdev=14564.73 00:32:52.069 lat (usec): min=5734, max=56720, avg=15116.67, stdev=14564.84 00:32:52.069 clat percentiles (usec): 00:32:52.069 | 1.00th=[ 5997], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 7767], 00:32:52.069 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10683], 00:32:52.069 | 70.00th=[11600], 80.00th=[12649], 90.00th=[51119], 95.00th=[53740], 00:32:52.069 | 99.00th=[55313], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:32:52.069 | 99.99th=[56886] 00:32:52.069 bw ( KiB/s): min=17664, max=36096, per=38.12%, avg=25344.00, stdev=6415.34, samples=10 00:32:52.069 iops : min= 138, max= 282, avg=198.00, stdev=50.12, samples=10 00:32:52.069 lat (msec) : 10=51.36%, 20=35.95%, 50=0.70%, 100=11.98% 00:32:52.069 cpu : usr=95.44%, sys=3.98%, ctx=81, majf=0, minf=57 00:32:52.069 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.069 issued rwts: total=993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.069 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.069 filename0: (groupid=0, jobs=1): err= 0: pid=2604528: Sat Jun 8 21:28:30 2024 00:32:52.069 read: IOPS=110, BW=13.9MiB/s (14.5MB/s)(70.0MiB/5048msec) 00:32:52.069 slat (nsec): min=7827, max=30266, avg=8695.79, stdev=1616.74 00:32:52.069 clat (usec): min=7075, max=95210, avg=26941.66, stdev=21127.19 00:32:52.069 lat (usec): min=7083, max=95219, avg=26950.36, stdev=21127.04 00:32:52.069 clat percentiles (usec): 00:32:52.069 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[10290], 00:32:52.069 | 30.00th=[11207], 40.00th=[12387], 50.00th=[13304], 60.00th=[15008], 00:32:52.069 | 70.00th=[51643], 80.00th=[53216], 90.00th=[54264], 95.00th=[55313], 00:32:52.069 | 99.00th=[94897], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:32:52.069 | 99.99th=[94897] 00:32:52.069 bw ( KiB/s): min=10752, max=17152, per=21.48%, avg=14284.80, stdev=2382.62, samples=10 00:32:52.069 iops : min= 84, max= 134, avg=111.60, stdev=18.61, samples=10 00:32:52.069 lat (msec) : 10=16.96%, 20=47.32%, 50=0.71%, 100=35.00% 00:32:52.069 cpu : usr=96.93%, sys=2.77%, ctx=9, majf=0, minf=83 00:32:52.069 IO depths : 1=5.0%, 2=95.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.069 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.069 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.069 filename0: (groupid=0, jobs=1): err= 0: pid=2604529: Sat Jun 8 21:28:30 2024 00:32:52.069 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5010msec) 00:32:52.069 slat (nsec): min=2990, max=17048, avg=5945.58, stdev=685.34 00:32:52.069 clat (usec): min=5499, max=94788, avg=14047.56, stdev=14393.91 00:32:52.069 lat (usec): min=5505, max=94794, avg=14053.51, stdev=14393.97 00:32:52.069 clat percentiles 
(usec): 00:32:52.069 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7308], 00:32:52.069 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[ 9896], 00:32:52.069 | 70.00th=[10945], 80.00th=[12125], 90.00th=[49546], 95.00th=[52167], 00:32:52.069 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897], 00:32:52.069 | 99.99th=[94897] 00:32:52.069 bw ( KiB/s): min=20736, max=38912, per=41.05%, avg=27289.60, stdev=6253.48, samples=10 00:32:52.069 iops : min= 162, max= 304, avg=213.20, stdev=48.86, samples=10 00:32:52.069 lat (msec) : 10=61.09%, 20=27.88%, 50=1.96%, 100=9.07% 00:32:52.069 cpu : usr=96.23%, sys=3.21%, ctx=159, majf=0, minf=149 00:32:52.069 IO depths : 1=3.5%, 2=96.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.069 issued rwts: total=1069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.069 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:52.069 00:32:52.069 Run status group 0 (all jobs): 00:32:52.069 READ: bw=64.9MiB/s (68.1MB/s), 13.9MiB/s-26.7MiB/s (14.5MB/s-28.0MB/s), io=328MiB (344MB), run=5006-5048msec 00:32:52.069 21:28:30 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:52.069 21:28:30 -- target/dif.sh@43 -- # local sub 00:32:52.069 21:28:30 -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.069 21:28:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:52.069 21:28:30 -- target/dif.sh@36 -- # local sub_id=0 00:32:52.069 21:28:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.069 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.069 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.330 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.330 21:28:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:52.330 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.330 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.330 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.330 21:28:30 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:52.330 21:28:30 -- target/dif.sh@109 -- # bs=4k 00:32:52.330 21:28:30 -- target/dif.sh@109 -- # numjobs=8 00:32:52.330 21:28:30 -- target/dif.sh@109 -- # iodepth=16 00:32:52.330 21:28:30 -- target/dif.sh@109 -- # runtime= 00:32:52.330 21:28:30 -- target/dif.sh@109 -- # files=2 00:32:52.330 21:28:30 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:52.330 21:28:30 -- target/dif.sh@28 -- # local sub 00:32:52.330 21:28:30 -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.330 21:28:30 -- target/dif.sh@31 -- # create_subsystem 0 00:32:52.330 21:28:30 -- target/dif.sh@18 -- # local sub_id=0 00:32:52.330 21:28:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:52.330 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.330 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.330 bdev_null0 00:32:52.330 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.330 21:28:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:52.330 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.330 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.330 21:28:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.330 21:28:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:52.330 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.330 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.330 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.330 21:28:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.330 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.330 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.330 [2024-06-08 21:28:30.216266] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.330 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.330 21:28:30 -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.330 21:28:30 -- target/dif.sh@31 -- # create_subsystem 1 00:32:52.330 21:28:30 -- target/dif.sh@18 -- # local sub_id=1 00:32:52.330 21:28:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:52.330 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 bdev_null1 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.331 21:28:30 -- target/dif.sh@31 -- # create_subsystem 2 00:32:52.331 21:28:30 -- target/dif.sh@18 -- # local sub_id=2 00:32:52.331 21:28:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 bdev_null2 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 
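For the NULL_DIF=2 case the harness repeats the same create_subsystem sequence for sub IDs 0, 1 and 2 (bdev_null0..bdev_null2, cnode0..cnode2), as the traces around this point show. Condensed into a loop, again as a sketch over scripts/rpc.py rather than the harness's rpc_cmd wrapper:

# Hedged sketch of create_subsystems 0 1 2 with --dif-type 2 null bdevs.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for sub in 0 1 2; do
    $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
         --serial-number "53313233-$sub" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
done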
00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:52.331 21:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.331 21:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:52.331 21:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.331 21:28:30 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:52.331 21:28:30 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:52.331 21:28:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:52.331 21:28:30 -- nvmf/common.sh@520 -- # config=() 00:32:52.331 21:28:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.331 21:28:30 -- nvmf/common.sh@520 -- # local subsystem config 00:32:52.331 21:28:30 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.331 21:28:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:52.331 21:28:30 -- target/dif.sh@82 -- # gen_fio_conf 00:32:52.331 21:28:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:52.331 { 00:32:52.331 "params": { 00:32:52.331 "name": "Nvme$subsystem", 00:32:52.331 "trtype": "$TEST_TRANSPORT", 00:32:52.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.331 "adrfam": "ipv4", 00:32:52.331 "trsvcid": "$NVMF_PORT", 00:32:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.331 "hdgst": ${hdgst:-false}, 00:32:52.331 "ddgst": ${ddgst:-false} 00:32:52.331 }, 00:32:52.331 "method": "bdev_nvme_attach_controller" 00:32:52.331 } 00:32:52.331 EOF 00:32:52.331 )") 00:32:52.331 21:28:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:52.331 21:28:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.331 21:28:30 -- target/dif.sh@54 -- # local file 00:32:52.331 21:28:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:52.331 21:28:30 -- target/dif.sh@56 -- # cat 00:32:52.331 21:28:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.331 21:28:30 -- common/autotest_common.sh@1320 -- # shift 00:32:52.331 21:28:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:52.331 21:28:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.331 21:28:30 -- nvmf/common.sh@542 -- # cat 00:32:52.331 21:28:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:52.331 21:28:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.331 21:28:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:52.331 21:28:30 -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.331 21:28:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:52.331 21:28:30 -- target/dif.sh@73 -- # cat 00:32:52.331 21:28:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:52.331 21:28:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:52.331 { 00:32:52.331 "params": { 00:32:52.331 "name": "Nvme$subsystem", 00:32:52.331 "trtype": "$TEST_TRANSPORT", 00:32:52.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.331 "adrfam": "ipv4", 
00:32:52.331 "trsvcid": "$NVMF_PORT", 00:32:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.331 "hdgst": ${hdgst:-false}, 00:32:52.331 "ddgst": ${ddgst:-false} 00:32:52.331 }, 00:32:52.331 "method": "bdev_nvme_attach_controller" 00:32:52.331 } 00:32:52.331 EOF 00:32:52.331 )") 00:32:52.331 21:28:30 -- target/dif.sh@72 -- # (( file++ )) 00:32:52.331 21:28:30 -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.331 21:28:30 -- target/dif.sh@73 -- # cat 00:32:52.331 21:28:30 -- nvmf/common.sh@542 -- # cat 00:32:52.331 21:28:30 -- target/dif.sh@72 -- # (( file++ )) 00:32:52.331 21:28:30 -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.331 21:28:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:52.331 21:28:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:52.331 { 00:32:52.331 "params": { 00:32:52.331 "name": "Nvme$subsystem", 00:32:52.331 "trtype": "$TEST_TRANSPORT", 00:32:52.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.331 "adrfam": "ipv4", 00:32:52.331 "trsvcid": "$NVMF_PORT", 00:32:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.331 "hdgst": ${hdgst:-false}, 00:32:52.331 "ddgst": ${ddgst:-false} 00:32:52.331 }, 00:32:52.331 "method": "bdev_nvme_attach_controller" 00:32:52.331 } 00:32:52.331 EOF 00:32:52.331 )") 00:32:52.331 21:28:30 -- nvmf/common.sh@542 -- # cat 00:32:52.331 21:28:30 -- nvmf/common.sh@544 -- # jq . 00:32:52.331 21:28:30 -- nvmf/common.sh@545 -- # IFS=, 00:32:52.331 21:28:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:52.331 "params": { 00:32:52.331 "name": "Nvme0", 00:32:52.331 "trtype": "tcp", 00:32:52.331 "traddr": "10.0.0.2", 00:32:52.331 "adrfam": "ipv4", 00:32:52.331 "trsvcid": "4420", 00:32:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:52.331 "hdgst": false, 00:32:52.331 "ddgst": false 00:32:52.331 }, 00:32:52.331 "method": "bdev_nvme_attach_controller" 00:32:52.331 },{ 00:32:52.331 "params": { 00:32:52.331 "name": "Nvme1", 00:32:52.331 "trtype": "tcp", 00:32:52.331 "traddr": "10.0.0.2", 00:32:52.331 "adrfam": "ipv4", 00:32:52.331 "trsvcid": "4420", 00:32:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:52.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:52.331 "hdgst": false, 00:32:52.331 "ddgst": false 00:32:52.331 }, 00:32:52.331 "method": "bdev_nvme_attach_controller" 00:32:52.331 },{ 00:32:52.332 "params": { 00:32:52.332 "name": "Nvme2", 00:32:52.332 "trtype": "tcp", 00:32:52.332 "traddr": "10.0.0.2", 00:32:52.332 "adrfam": "ipv4", 00:32:52.332 "trsvcid": "4420", 00:32:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:52.332 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:52.332 "hdgst": false, 00:32:52.332 "ddgst": false 00:32:52.332 }, 00:32:52.332 "method": "bdev_nvme_attach_controller" 00:32:52.332 }' 00:32:52.332 21:28:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:52.332 21:28:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:52.332 21:28:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.332 21:28:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.332 21:28:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:52.332 21:28:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:52.332 21:28:30 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:32:52.332 21:28:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:52.332 21:28:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:52.332 21:28:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.918 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.918 ... 00:32:52.918 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.918 ... 00:32:52.918 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:52.918 ... 00:32:52.918 fio-3.35 00:32:52.918 Starting 24 threads 00:32:52.918 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.488 [2024-06-08 21:28:31.463325] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:53.488 [2024-06-08 21:28:31.463376] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:05.715 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606047: Sat Jun 8 21:28:41 2024 00:33:05.716 read: IOPS=532, BW=2129KiB/s (2181kB/s)(21.1MiB/10123msec) 00:33:05.716 slat (nsec): min=5503, max=85189, avg=12432.80, stdev=9544.00 00:33:05.716 clat (msec): min=4, max=133, avg=29.86, stdev= 7.01 00:33:05.716 lat (msec): min=4, max=133, avg=29.87, stdev= 7.01 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 24], 20.00th=[ 29], 00:33:05.716 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 31], 80.00th=[ 32], 90.00th=[ 36], 95.00th=[ 40], 00:33:05.716 | 99.00th=[ 46], 99.50th=[ 49], 99.90th=[ 130], 99.95th=[ 133], 00:33:05.716 | 99.99th=[ 133] 00:33:05.716 bw ( KiB/s): min= 1920, max= 2584, per=4.34%, avg=2148.70, stdev=130.74, samples=20 00:33:05.716 iops : min= 480, max= 646, avg=537.10, stdev=32.69, samples=20 00:33:05.716 lat (msec) : 10=1.43%, 20=3.90%, 50=94.38%, 100=0.11%, 250=0.19% 00:33:05.716 cpu : usr=98.96%, sys=0.71%, ctx=25, majf=0, minf=47 00:33:05.716 IO depths : 1=1.3%, 2=2.7%, 4=10.3%, 8=73.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:33:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 issued rwts: total=5389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606048: Sat Jun 8 21:28:41 2024 00:33:05.716 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.7MiB/10077msec) 00:33:05.716 slat (usec): min=5, max=268, avg=14.16, stdev=11.93 00:33:05.716 clat (msec): min=13, max=110, avg=31.92, stdev= 6.47 00:33:05.716 lat (msec): min=13, max=110, avg=31.93, stdev= 6.47 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:33:05.716 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 42], 00:33:05.716 | 99.00th=[ 51], 99.50th=[ 59], 99.90th=[ 110], 99.95th=[ 110], 00:33:05.716 | 99.99th=[ 110] 00:33:05.716 bw ( KiB/s): min= 1776, max= 2176, per=4.06%, avg=2008.10, stdev=126.08, samples=20 00:33:05.716 iops : min= 444, max= 544, avg=501.95, stdev=31.48, 
samples=20 00:33:05.716 lat (msec) : 20=0.93%, 50=97.78%, 100=0.97%, 250=0.32% 00:33:05.716 cpu : usr=96.74%, sys=1.75%, ctx=93, majf=0, minf=34 00:33:05.716 IO depths : 1=0.2%, 2=0.8%, 4=9.1%, 8=76.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:33:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 issued rwts: total=5037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606049: Sat Jun 8 21:28:41 2024 00:33:05.716 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.6MiB/10077msec) 00:33:05.716 slat (nsec): min=5495, max=89144, avg=14569.89, stdev=11758.44 00:33:05.716 clat (msec): min=14, max=113, avg=32.06, stdev= 7.08 00:33:05.716 lat (msec): min=14, max=113, avg=32.08, stdev= 7.08 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 20], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 29], 00:33:05.716 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 32], 80.00th=[ 39], 90.00th=[ 41], 95.00th=[ 43], 00:33:05.716 | 99.00th=[ 50], 99.50th=[ 53], 99.90th=[ 114], 99.95th=[ 114], 00:33:05.716 | 99.99th=[ 114] 00:33:05.716 bw ( KiB/s): min= 1744, max= 2104, per=4.03%, avg=1996.40, stdev=99.24, samples=20 00:33:05.716 iops : min= 436, max= 526, avg=498.95, stdev=24.76, samples=20 00:33:05.716 lat (msec) : 20=1.58%, 50=97.70%, 100=0.52%, 250=0.20% 00:33:05.716 cpu : usr=98.92%, sys=0.76%, ctx=14, majf=0, minf=40 00:33:05.716 IO depths : 1=0.9%, 2=1.8%, 4=9.5%, 8=73.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:33:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 complete : 0=0.0%, 4=90.8%, 8=6.2%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 issued rwts: total=5009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606050: Sat Jun 8 21:28:41 2024 00:33:05.716 read: IOPS=536, BW=2146KiB/s (2198kB/s)(21.2MiB/10108msec) 00:33:05.716 slat (nsec): min=5493, max=56886, avg=9375.06, stdev=5339.63 00:33:05.716 clat (msec): min=10, max=113, avg=29.74, stdev= 5.42 00:33:05.716 lat (msec): min=10, max=113, avg=29.75, stdev= 5.42 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.716 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.716 | 99.00th=[ 41], 99.50th=[ 45], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.716 | 99.99th=[ 113] 00:33:05.716 bw ( KiB/s): min= 2059, max= 2384, per=4.37%, avg=2160.45, stdev=70.49, samples=20 00:33:05.716 iops : min= 514, max= 596, avg=539.95, stdev=17.64, samples=20 00:33:05.716 lat (msec) : 20=2.25%, 50=97.42%, 100=0.04%, 250=0.29% 00:33:05.716 cpu : usr=99.11%, sys=0.52%, ctx=97, majf=0, minf=33 00:33:05.716 IO depths : 1=3.6%, 2=9.3%, 4=23.4%, 8=54.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:33:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606051: Sat Jun 8 21:28:41 2024 00:33:05.716 read: 
IOPS=512, BW=2051KiB/s (2100kB/s)(20.2MiB/10075msec) 00:33:05.716 slat (nsec): min=5373, max=75771, avg=12085.12, stdev=7966.23 00:33:05.716 clat (msec): min=16, max=132, avg=31.14, stdev= 6.69 00:33:05.716 lat (msec): min=16, max=132, avg=31.15, stdev= 6.69 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:33:05.716 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 31], 80.00th=[ 32], 90.00th=[ 37], 95.00th=[ 41], 00:33:05.716 | 99.00th=[ 51], 99.50th=[ 59], 99.90th=[ 126], 99.95th=[ 133], 00:33:05.716 | 99.99th=[ 133] 00:33:05.716 bw ( KiB/s): min= 1664, max= 2176, per=4.16%, avg=2059.75, stdev=142.24, samples=20 00:33:05.716 iops : min= 416, max= 544, avg=514.90, stdev=35.55, samples=20 00:33:05.716 lat (msec) : 20=0.87%, 50=98.30%, 100=0.52%, 250=0.31% 00:33:05.716 cpu : usr=98.59%, sys=0.81%, ctx=23, majf=0, minf=39 00:33:05.716 IO depths : 1=0.4%, 2=0.9%, 4=4.3%, 8=78.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:33:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 complete : 0=0.0%, 4=89.9%, 8=8.3%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 issued rwts: total=5166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606052: Sat Jun 8 21:28:41 2024 00:33:05.716 read: IOPS=534, BW=2137KiB/s (2189kB/s)(21.1MiB/10098msec) 00:33:05.716 slat (nsec): min=5508, max=74601, avg=11474.22, stdev=8384.94 00:33:05.716 clat (msec): min=14, max=113, avg=29.86, stdev= 6.50 00:33:05.716 lat (msec): min=14, max=113, avg=29.88, stdev= 6.50 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 29], 00:33:05.716 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 40], 00:33:05.716 | 99.00th=[ 48], 99.50th=[ 48], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.716 | 99.99th=[ 113] 00:33:05.716 bw ( KiB/s): min= 1923, max= 2320, per=4.35%, avg=2151.15, stdev=97.21, samples=20 00:33:05.716 iops : min= 480, max= 580, avg=537.60, stdev=24.29, samples=20 00:33:05.716 lat (msec) : 20=4.00%, 50=95.70%, 250=0.30% 00:33:05.716 cpu : usr=96.16%, sys=1.90%, ctx=224, majf=0, minf=29 00:33:05.716 IO depths : 1=1.0%, 2=5.4%, 4=19.8%, 8=61.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:05.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 complete : 0=0.0%, 4=93.0%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.716 issued rwts: total=5396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.716 filename0: (groupid=0, jobs=1): err= 0: pid=2606053: Sat Jun 8 21:28:41 2024 00:33:05.716 read: IOPS=503, BW=2013KiB/s (2062kB/s)(19.8MiB/10091msec) 00:33:05.716 slat (nsec): min=5495, max=89026, avg=11985.91, stdev=9613.78 00:33:05.716 clat (msec): min=13, max=113, avg=31.72, stdev= 7.87 00:33:05.716 lat (msec): min=13, max=113, avg=31.73, stdev= 7.87 00:33:05.716 clat percentiles (msec): 00:33:05.716 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 29], 00:33:05.716 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.716 | 70.00th=[ 32], 80.00th=[ 38], 90.00th=[ 40], 95.00th=[ 43], 00:33:05.716 | 99.00th=[ 52], 99.50th=[ 60], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.716 | 99.99th=[ 114] 00:33:05.717 bw ( KiB/s): min= 1872, max= 
2280, per=4.09%, avg=2024.95, stdev=83.37, samples=20 00:33:05.717 iops : min= 468, max= 570, avg=506.20, stdev=20.83, samples=20 00:33:05.717 lat (msec) : 20=4.59%, 50=94.01%, 100=1.08%, 250=0.32% 00:33:05.717 cpu : usr=98.84%, sys=0.75%, ctx=14, majf=0, minf=33 00:33:05.717 IO depths : 1=0.7%, 2=1.8%, 4=9.8%, 8=74.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.717 filename0: (groupid=0, jobs=1): err= 0: pid=2606054: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=507, BW=2030KiB/s (2079kB/s)(20.0MiB/10091msec) 00:33:05.717 slat (nsec): min=5522, max=70622, avg=13893.75, stdev=9995.62 00:33:05.717 clat (msec): min=12, max=127, avg=31.42, stdev= 6.86 00:33:05.717 lat (msec): min=12, max=127, avg=31.44, stdev= 6.86 00:33:05.717 clat percentiles (msec): 00:33:05.717 | 1.00th=[ 20], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.717 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.717 | 70.00th=[ 31], 80.00th=[ 32], 90.00th=[ 39], 95.00th=[ 42], 00:33:05.717 | 99.00th=[ 48], 99.50th=[ 52], 99.90th=[ 128], 99.95th=[ 128], 00:33:05.717 | 99.99th=[ 128] 00:33:05.717 bw ( KiB/s): min= 1688, max= 2176, per=4.12%, avg=2040.70, stdev=137.87, samples=20 00:33:05.717 iops : min= 422, max= 544, avg=510.05, stdev=34.42, samples=20 00:33:05.717 lat (msec) : 20=1.27%, 50=98.15%, 100=0.27%, 250=0.31% 00:33:05.717 cpu : usr=99.17%, sys=0.49%, ctx=65, majf=0, minf=30 00:33:05.717 IO depths : 1=4.1%, 2=8.2%, 4=19.1%, 8=59.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=92.7%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.717 filename1: (groupid=0, jobs=1): err= 0: pid=2606055: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=532, BW=2130KiB/s (2181kB/s)(21.0MiB/10096msec) 00:33:05.717 slat (nsec): min=5630, max=72280, avg=12318.32, stdev=8976.90 00:33:05.717 clat (msec): min=11, max=109, avg=29.94, stdev= 4.57 00:33:05.717 lat (msec): min=11, max=109, avg=29.96, stdev= 4.57 00:33:05.717 clat percentiles (msec): 00:33:05.717 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.717 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.717 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.717 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 110], 99.95th=[ 110], 00:33:05.717 | 99.99th=[ 110] 00:33:05.717 bw ( KiB/s): min= 2043, max= 2180, per=4.33%, avg=2141.45, stdev=57.10, samples=20 00:33:05.717 iops : min= 510, max= 545, avg=535.20, stdev=14.37, samples=20 00:33:05.717 lat (msec) : 20=0.32%, 50=99.39%, 250=0.30% 00:33:05.717 cpu : usr=99.34%, sys=0.36%, ctx=30, majf=0, minf=29 00:33:05.717 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 
00:33:05.717 filename1: (groupid=0, jobs=1): err= 0: pid=2606056: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=530, BW=2121KiB/s (2172kB/s)(20.9MiB/10080msec) 00:33:05.717 slat (nsec): min=5566, max=85530, avg=16186.50, stdev=12117.65 00:33:05.717 clat (msec): min=19, max=114, avg=30.04, stdev= 4.77 00:33:05.717 lat (msec): min=19, max=114, avg=30.05, stdev= 4.77 00:33:05.717 clat percentiles (msec): 00:33:05.717 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.717 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.717 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.717 | 99.00th=[ 33], 99.50th=[ 47], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.717 | 99.99th=[ 115] 00:33:05.717 bw ( KiB/s): min= 2043, max= 2176, per=4.30%, avg=2129.95, stdev=61.84, samples=20 00:33:05.717 iops : min= 510, max= 544, avg=532.30, stdev=15.58, samples=20 00:33:05.717 lat (msec) : 20=0.04%, 50=99.66%, 250=0.30% 00:33:05.717 cpu : usr=98.38%, sys=0.76%, ctx=24, majf=0, minf=47 00:33:05.717 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.717 filename1: (groupid=0, jobs=1): err= 0: pid=2606057: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=547, BW=2190KiB/s (2242kB/s)(21.6MiB/10083msec) 00:33:05.717 slat (nsec): min=5497, max=82637, avg=14309.90, stdev=10875.89 00:33:05.717 clat (msec): min=15, max=103, avg=29.10, stdev= 5.51 00:33:05.717 lat (msec): min=15, max=103, avg=29.12, stdev= 5.51 00:33:05.717 clat percentiles (msec): 00:33:05.717 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 29], 00:33:05.717 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:33:05.717 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.717 | 99.00th=[ 42], 99.50th=[ 48], 99.90th=[ 103], 99.95th=[ 103], 00:33:05.717 | 99.99th=[ 104] 00:33:05.717 bw ( KiB/s): min= 2032, max= 2720, per=4.45%, avg=2200.05, stdev=153.59, samples=20 00:33:05.717 iops : min= 508, max= 680, avg=549.90, stdev=38.49, samples=20 00:33:05.717 lat (msec) : 20=5.25%, 50=94.42%, 100=0.04%, 250=0.29% 00:33:05.717 cpu : usr=96.35%, sys=1.73%, ctx=46, majf=0, minf=31 00:33:05.717 IO depths : 1=4.2%, 2=9.3%, 4=21.4%, 8=56.6%, 16=8.4%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.717 filename1: (groupid=0, jobs=1): err= 0: pid=2606058: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.8MiB/10103msec) 00:33:05.717 slat (nsec): min=5496, max=89070, avg=14238.70, stdev=11844.52 00:33:05.717 clat (msec): min=12, max=113, avg=31.79, stdev= 8.36 00:33:05.717 lat (msec): min=12, max=113, avg=31.80, stdev= 8.36 00:33:05.717 clat percentiles (msec): 00:33:05.717 | 1.00th=[ 19], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 29], 00:33:05.717 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.717 | 70.00th=[ 32], 80.00th=[ 39], 90.00th=[ 41], 95.00th=[ 44], 00:33:05.717 | 99.00th=[ 56], 99.50th=[ 
59], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.717 | 99.99th=[ 113] 00:33:05.717 bw ( KiB/s): min= 1795, max= 2208, per=4.08%, avg=2020.10, stdev=108.71, samples=20 00:33:05.717 iops : min= 448, max= 552, avg=504.80, stdev=27.11, samples=20 00:33:05.717 lat (msec) : 20=5.07%, 50=92.49%, 100=2.13%, 250=0.32% 00:33:05.717 cpu : usr=98.31%, sys=1.08%, ctx=192, majf=0, minf=44 00:33:05.717 IO depths : 1=1.3%, 2=3.1%, 4=11.5%, 8=71.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.717 filename1: (groupid=0, jobs=1): err= 0: pid=2606059: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=527, BW=2112KiB/s (2163kB/s)(20.7MiB/10016msec) 00:33:05.717 slat (nsec): min=5499, max=73203, avg=12858.58, stdev=9663.08 00:33:05.717 clat (usec): min=2272, max=57034, avg=30217.81, stdev=6202.48 00:33:05.717 lat (usec): min=2285, max=57042, avg=30230.67, stdev=6202.23 00:33:05.717 clat percentiles (usec): 00:33:05.717 | 1.00th=[ 4490], 5.00th=[20055], 10.00th=[25560], 20.00th=[28705], 00:33:05.717 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30016], 60.00th=[30278], 00:33:05.717 | 70.00th=[30802], 80.00th=[31327], 90.00th=[38011], 95.00th=[40633], 00:33:05.717 | 99.00th=[49021], 99.50th=[51643], 99.90th=[55837], 99.95th=[56886], 00:33:05.717 | 99.99th=[56886] 00:33:05.717 bw ( KiB/s): min= 1952, max= 2656, per=4.27%, avg=2112.05, stdev=146.55, samples=20 00:33:05.717 iops : min= 488, max= 664, avg=527.90, stdev=36.62, samples=20 00:33:05.717 lat (msec) : 4=0.61%, 10=0.91%, 20=3.39%, 50=94.19%, 100=0.91% 00:33:05.717 cpu : usr=98.93%, sys=0.66%, ctx=94, majf=0, minf=36 00:33:05.717 IO depths : 1=0.7%, 2=1.5%, 4=10.6%, 8=74.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:33:05.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.717 issued rwts: total=5288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.717 filename1: (groupid=0, jobs=1): err= 0: pid=2606060: Sat Jun 8 21:28:41 2024 00:33:05.717 read: IOPS=530, BW=2122KiB/s (2173kB/s)(20.9MiB/10103msec) 00:33:05.718 slat (nsec): min=5557, max=95119, avg=17377.71, stdev=12604.61 00:33:05.718 clat (msec): min=18, max=113, avg=30.00, stdev= 4.73 00:33:05.718 lat (msec): min=18, max=113, avg=30.02, stdev= 4.73 00:33:05.718 clat percentiles (msec): 00:33:05.718 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.718 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.718 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.718 | 99.00th=[ 33], 99.50th=[ 43], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.718 | 99.99th=[ 113] 00:33:05.718 bw ( KiB/s): min= 2048, max= 2176, per=4.32%, avg=2135.45, stdev=57.77, samples=20 00:33:05.718 iops : min= 512, max= 544, avg=533.65, stdev=14.39, samples=20 00:33:05.718 lat (msec) : 20=0.07%, 50=99.63%, 250=0.30% 00:33:05.718 cpu : usr=96.52%, sys=1.69%, ctx=54, majf=0, minf=38 00:33:05.718 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:05.718 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.718 filename1: (groupid=0, jobs=1): err= 0: pid=2606061: Sat Jun 8 21:28:41 2024 00:33:05.718 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.8MiB/10100msec) 00:33:05.718 slat (nsec): min=5497, max=79363, avg=14502.45, stdev=11389.73 00:33:05.718 clat (msec): min=17, max=113, avg=31.81, stdev= 7.07 00:33:05.718 lat (msec): min=17, max=113, avg=31.82, stdev= 7.07 00:33:05.718 clat percentiles (msec): 00:33:05.718 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 29], 00:33:05.718 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.718 | 70.00th=[ 32], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 44], 00:33:05.718 | 99.00th=[ 49], 99.50th=[ 53], 99.90th=[ 114], 99.95th=[ 114], 00:33:05.718 | 99.99th=[ 114] 00:33:05.718 bw ( KiB/s): min= 1920, max= 2147, per=4.08%, avg=2020.20, stdev=53.04, samples=20 00:33:05.718 iops : min= 480, max= 536, avg=504.90, stdev=13.08, samples=20 00:33:05.718 lat (msec) : 20=1.20%, 50=98.17%, 100=0.32%, 250=0.32% 00:33:05.718 cpu : usr=98.68%, sys=0.86%, ctx=139, majf=0, minf=48 00:33:05.718 IO depths : 1=0.5%, 2=0.9%, 4=7.1%, 8=76.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=90.1%, 8=6.9%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 issued rwts: total=5069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.718 filename1: (groupid=0, jobs=1): err= 0: pid=2606062: Sat Jun 8 21:28:41 2024 00:33:05.718 read: IOPS=530, BW=2124KiB/s (2175kB/s)(20.8MiB/10014msec) 00:33:05.718 slat (usec): min=5, max=102, avg=15.14, stdev=11.47 00:33:05.718 clat (usec): min=3983, max=52000, avg=30006.09, stdev=6740.07 00:33:05.718 lat (usec): min=4015, max=52008, avg=30021.23, stdev=6741.27 00:33:05.718 clat percentiles (usec): 00:33:05.718 | 1.00th=[ 8455], 5.00th=[19530], 10.00th=[20579], 20.00th=[28181], 00:33:05.718 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:33:05.718 | 70.00th=[30540], 80.00th=[35914], 90.00th=[39584], 95.00th=[41157], 00:33:05.718 | 99.00th=[44827], 99.50th=[47449], 99.90th=[51643], 99.95th=[52167], 00:33:05.718 | 99.99th=[52167] 00:33:05.718 bw ( KiB/s): min= 1664, max= 3456, per=4.30%, avg=2125.79, stdev=396.36, samples=19 00:33:05.718 iops : min= 416, max= 864, avg=531.37, stdev=99.10, samples=19 00:33:05.718 lat (msec) : 4=0.02%, 10=1.18%, 20=6.64%, 50=91.93%, 100=0.23% 00:33:05.718 cpu : usr=99.02%, sys=0.65%, ctx=32, majf=0, minf=37 00:33:05.718 IO depths : 1=4.1%, 2=8.1%, 4=19.1%, 8=59.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 issued rwts: total=5317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.718 filename2: (groupid=0, jobs=1): err= 0: pid=2606063: Sat Jun 8 21:28:41 2024 00:33:05.718 read: IOPS=532, BW=2131KiB/s (2183kB/s)(21.0MiB/10089msec) 00:33:05.718 slat (nsec): min=5534, max=68259, avg=9222.31, stdev=5839.64 00:33:05.718 clat (msec): min=16, max=109, avg=29.95, stdev= 4.70 00:33:05.718 lat (msec): min=16, max=109, avg=29.96, stdev= 4.70 00:33:05.718 clat percentiles (msec): 00:33:05.718 | 1.00th=[ 21], 5.00th=[ 28], 
10.00th=[ 29], 20.00th=[ 29], 00:33:05.718 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.718 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.718 | 99.00th=[ 32], 99.50th=[ 46], 99.90th=[ 110], 99.95th=[ 110], 00:33:05.718 | 99.99th=[ 110] 00:33:05.718 bw ( KiB/s): min= 1920, max= 2304, per=4.33%, avg=2143.00, stdev=91.72, samples=20 00:33:05.718 iops : min= 480, max= 576, avg=535.60, stdev=22.94, samples=20 00:33:05.718 lat (msec) : 20=0.60%, 50=99.11%, 250=0.30% 00:33:05.718 cpu : usr=99.33%, sys=0.38%, ctx=7, majf=0, minf=27 00:33:05.718 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.718 filename2: (groupid=0, jobs=1): err= 0: pid=2606064: Sat Jun 8 21:28:41 2024 00:33:05.718 read: IOPS=526, BW=2107KiB/s (2157kB/s)(20.6MiB/10016msec) 00:33:05.718 slat (usec): min=5, max=101, avg=14.36, stdev=11.90 00:33:05.718 clat (usec): min=13259, max=55936, avg=30281.97, stdev=4112.46 00:33:05.718 lat (usec): min=13265, max=55945, avg=30296.33, stdev=4112.70 00:33:05.718 clat percentiles (usec): 00:33:05.718 | 1.00th=[19006], 5.00th=[24773], 10.00th=[28181], 20.00th=[28967], 00:33:05.718 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:33:05.718 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31851], 95.00th=[39584], 00:33:05.718 | 99.00th=[45876], 99.50th=[47973], 99.90th=[55837], 99.95th=[55837], 00:33:05.718 | 99.99th=[55837] 00:33:05.718 bw ( KiB/s): min= 1792, max= 2219, per=4.25%, avg=2104.85, stdev=101.16, samples=20 00:33:05.718 iops : min= 448, max= 554, avg=526.10, stdev=25.18, samples=20 00:33:05.718 lat (msec) : 20=1.86%, 50=97.84%, 100=0.30% 00:33:05.718 cpu : usr=98.93%, sys=0.62%, ctx=87, majf=0, minf=37 00:33:05.718 IO depths : 1=1.0%, 2=2.0%, 4=11.8%, 8=71.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=91.1%, 8=5.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 issued rwts: total=5275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.718 filename2: (groupid=0, jobs=1): err= 0: pid=2606065: Sat Jun 8 21:28:41 2024 00:33:05.718 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.5MiB/10077msec) 00:33:05.718 slat (nsec): min=5502, max=84790, avg=15351.85, stdev=12118.94 00:33:05.718 clat (msec): min=13, max=113, avg=32.18, stdev= 7.07 00:33:05.718 lat (msec): min=13, max=113, avg=32.20, stdev= 7.07 00:33:05.718 clat percentiles (msec): 00:33:05.718 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 29], 00:33:05.718 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.718 | 70.00th=[ 32], 80.00th=[ 39], 90.00th=[ 41], 95.00th=[ 43], 00:33:05.718 | 99.00th=[ 49], 99.50th=[ 57], 99.90th=[ 113], 99.95th=[ 114], 00:33:05.718 | 99.99th=[ 114] 00:33:05.718 bw ( KiB/s): min= 1788, max= 2104, per=4.02%, avg=1988.05, stdev=82.60, samples=20 00:33:05.718 iops : min= 447, max= 526, avg=496.90, stdev=20.65, samples=20 00:33:05.718 lat (msec) : 20=1.64%, 50=97.41%, 100=0.74%, 250=0.20% 00:33:05.718 cpu : usr=98.92%, sys=0.76%, ctx=16, majf=0, minf=44 00:33:05.718 IO depths : 1=1.4%, 2=2.9%, 4=11.8%, 
8=70.5%, 16=13.3%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=91.2%, 8=5.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.718 filename2: (groupid=0, jobs=1): err= 0: pid=2606066: Sat Jun 8 21:28:41 2024 00:33:05.718 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.9MiB/10098msec) 00:33:05.718 slat (nsec): min=5529, max=79712, avg=12094.32, stdev=9386.53 00:33:05.718 clat (msec): min=19, max=101, avg=30.04, stdev= 4.14 00:33:05.718 lat (msec): min=19, max=101, avg=30.05, stdev= 4.14 00:33:05.718 clat percentiles (msec): 00:33:05.718 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.718 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.718 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:33:05.718 | 99.00th=[ 37], 99.50th=[ 45], 99.90th=[ 102], 99.95th=[ 102], 00:33:05.718 | 99.99th=[ 102] 00:33:05.718 bw ( KiB/s): min= 1923, max= 2176, per=4.32%, avg=2136.75, stdev=72.59, samples=20 00:33:05.718 iops : min= 480, max= 544, avg=534.00, stdev=18.26, samples=20 00:33:05.718 lat (msec) : 20=0.04%, 50=99.66%, 250=0.30% 00:33:05.718 cpu : usr=99.25%, sys=0.45%, ctx=23, majf=0, minf=41 00:33:05.718 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.719 filename2: (groupid=0, jobs=1): err= 0: pid=2606067: Sat Jun 8 21:28:41 2024 00:33:05.719 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.8MiB/10076msec) 00:33:05.719 slat (nsec): min=5504, max=84105, avg=13557.79, stdev=10699.04 00:33:05.719 clat (msec): min=15, max=119, avg=31.72, stdev= 7.04 00:33:05.719 lat (msec): min=15, max=119, avg=31.73, stdev= 7.04 00:33:05.719 clat percentiles (msec): 00:33:05.719 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:33:05.719 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.719 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 40], 95.00th=[ 42], 00:33:05.719 | 99.00th=[ 55], 99.50th=[ 59], 99.90th=[ 120], 99.95th=[ 120], 00:33:05.719 | 99.99th=[ 120] 00:33:05.719 bw ( KiB/s): min= 1715, max= 2176, per=4.09%, avg=2021.50, stdev=110.10, samples=20 00:33:05.719 iops : min= 428, max= 544, avg=505.30, stdev=27.60, samples=20 00:33:05.719 lat (msec) : 20=1.78%, 50=96.61%, 100=1.30%, 250=0.32% 00:33:05.719 cpu : usr=99.19%, sys=0.51%, ctx=27, majf=0, minf=38 00:33:05.719 IO depths : 1=0.3%, 2=0.8%, 4=8.1%, 8=77.0%, 16=13.8%, 32=0.0%, >=64=0.0% 00:33:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 issued rwts: total=5070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.719 filename2: (groupid=0, jobs=1): err= 0: pid=2606068: Sat Jun 8 21:28:41 2024 00:33:05.719 read: IOPS=483, BW=1934KiB/s (1980kB/s)(19.0MiB/10075msec) 00:33:05.719 slat (nsec): min=5505, max=86578, avg=15018.66, stdev=12246.30 00:33:05.719 clat (msec): min=15, max=113, avg=33.00, stdev= 7.55 00:33:05.719 
lat (msec): min=15, max=113, avg=33.01, stdev= 7.55 00:33:05.719 clat percentiles (msec): 00:33:05.719 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 30], 00:33:05.719 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:33:05.719 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 45], 00:33:05.719 | 99.00th=[ 52], 99.50th=[ 59], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.719 | 99.99th=[ 114] 00:33:05.719 bw ( KiB/s): min= 1776, max= 2104, per=3.92%, avg=1941.75, stdev=74.71, samples=20 00:33:05.719 iops : min= 444, max= 526, avg=485.40, stdev=18.68, samples=20 00:33:05.719 lat (msec) : 20=0.76%, 50=98.01%, 100=0.94%, 250=0.29% 00:33:05.719 cpu : usr=99.18%, sys=0.52%, ctx=13, majf=0, minf=48 00:33:05.719 IO depths : 1=0.1%, 2=0.3%, 4=8.2%, 8=77.0%, 16=14.4%, 32=0.0%, >=64=0.0% 00:33:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 issued rwts: total=4871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.719 filename2: (groupid=0, jobs=1): err= 0: pid=2606069: Sat Jun 8 21:28:41 2024 00:33:05.719 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.7MiB/10099msec) 00:33:05.719 slat (nsec): min=5496, max=74021, avg=13313.10, stdev=9884.89 00:33:05.719 clat (msec): min=12, max=113, avg=31.94, stdev= 7.47 00:33:05.719 lat (msec): min=12, max=113, avg=31.95, stdev= 7.47 00:33:05.719 clat percentiles (msec): 00:33:05.719 | 1.00th=[ 19], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 29], 00:33:05.719 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:33:05.719 | 70.00th=[ 32], 80.00th=[ 39], 90.00th=[ 41], 95.00th=[ 42], 00:33:05.719 | 99.00th=[ 50], 99.50th=[ 58], 99.90th=[ 113], 99.95th=[ 113], 00:33:05.719 | 99.99th=[ 114] 00:33:05.719 bw ( KiB/s): min= 1792, max= 2176, per=4.07%, avg=2011.20, stdev=101.95, samples=20 00:33:05.719 iops : min= 448, max= 544, avg=502.65, stdev=25.50, samples=20 00:33:05.719 lat (msec) : 20=3.84%, 50=95.34%, 100=0.50%, 250=0.32% 00:33:05.719 cpu : usr=98.81%, sys=0.81%, ctx=16, majf=0, minf=48 00:33:05.719 IO depths : 1=2.2%, 2=4.8%, 4=14.2%, 8=67.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 complete : 0=0.0%, 4=91.5%, 8=3.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 issued rwts: total=5046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.719 filename2: (groupid=0, jobs=1): err= 0: pid=2606070: Sat Jun 8 21:28:41 2024 00:33:05.719 read: IOPS=519, BW=2076KiB/s (2126kB/s)(20.5MiB/10091msec) 00:33:05.719 slat (nsec): min=5511, max=83765, avg=12595.05, stdev=9236.61 00:33:05.719 clat (msec): min=14, max=113, avg=30.73, stdev= 6.86 00:33:05.719 lat (msec): min=14, max=113, avg=30.75, stdev= 6.86 00:33:05.719 clat percentiles (msec): 00:33:05.719 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 29], 00:33:05.719 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:33:05.719 | 70.00th=[ 31], 80.00th=[ 32], 90.00th=[ 40], 95.00th=[ 41], 00:33:05.719 | 99.00th=[ 50], 99.50th=[ 51], 99.90th=[ 113], 99.95th=[ 114], 00:33:05.719 | 99.99th=[ 114] 00:33:05.719 bw ( KiB/s): min= 1920, max= 2192, per=4.22%, avg=2088.35, stdev=85.48, samples=20 00:33:05.719 iops : min= 480, max= 548, avg=522.05, stdev=21.36, samples=20 00:33:05.719 lat (msec) : 20=3.07%, 50=96.32%, 100=0.31%, 250=0.31% 
00:33:05.719 cpu : usr=99.16%, sys=0.50%, ctx=104, majf=0, minf=47 00:33:05.719 IO depths : 1=0.9%, 2=4.9%, 4=18.1%, 8=63.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:33:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 complete : 0=0.0%, 4=92.7%, 8=2.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.719 issued rwts: total=5238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:05.719 00:33:05.719 Run status group 0 (all jobs): 00:33:05.719 READ: bw=48.3MiB/s (50.7MB/s), 1934KiB/s-2190KiB/s (1980kB/s-2242kB/s), io=489MiB (513MB), run=10014-10123msec 00:33:05.719 21:28:41 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:05.719 21:28:41 -- target/dif.sh@43 -- # local sub 00:33:05.719 21:28:41 -- target/dif.sh@45 -- # for sub in "$@" 00:33:05.719 21:28:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:05.719 21:28:41 -- target/dif.sh@36 -- # local sub_id=0 00:33:05.719 21:28:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@45 -- # for sub in "$@" 00:33:05.719 21:28:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:05.719 21:28:41 -- target/dif.sh@36 -- # local sub_id=1 00:33:05.719 21:28:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@45 -- # for sub in "$@" 00:33:05.719 21:28:41 -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:05.719 21:28:41 -- target/dif.sh@36 -- # local sub_id=2 00:33:05.719 21:28:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@115 -- # NULL_DIF=1 00:33:05.719 21:28:41 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:05.719 21:28:41 -- target/dif.sh@115 -- # numjobs=2 00:33:05.719 21:28:41 -- target/dif.sh@115 -- # iodepth=8 00:33:05.719 21:28:41 -- target/dif.sh@115 -- # runtime=5 00:33:05.719 21:28:41 -- target/dif.sh@115 -- # files=1 
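The trace above tears down the three null-bdev subsystems from the previous pass (nvmf_delete_subsystem followed by bdev_null_delete for cnode0/1/2) and then sets the parameters for the next fio_dif_rand_params pass over two subsystems: bs=8k,16k,128k is fio's read/write/trim block-size triplet, with numjobs=2, queue depth 8, and a 5-second runtime. A rough hand-run equivalent of that teardown, issued directly with rpc.py (script path and the default RPC socket are assumptions, not taken from this run), would be:

for i in 0 1 2; do
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i   # drop the NVMe-oF subsystem first
    ./scripts/rpc.py bdev_null_delete bdev_null$i                        # then free its backing null bdev
done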
00:33:05.719 21:28:41 -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:05.719 21:28:41 -- target/dif.sh@28 -- # local sub 00:33:05.719 21:28:41 -- target/dif.sh@30 -- # for sub in "$@" 00:33:05.719 21:28:41 -- target/dif.sh@31 -- # create_subsystem 0 00:33:05.719 21:28:41 -- target/dif.sh@18 -- # local sub_id=0 00:33:05.719 21:28:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 bdev_null0 00:33:05.719 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.719 21:28:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:05.719 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.719 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:05.720 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.720 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 21:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:05.720 21:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.720 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 [2024-06-08 21:28:42.000249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.720 21:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:42 -- target/dif.sh@30 -- # for sub in "$@" 00:33:05.720 21:28:42 -- target/dif.sh@31 -- # create_subsystem 1 00:33:05.720 21:28:42 -- target/dif.sh@18 -- # local sub_id=1 00:33:05.720 21:28:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:05.720 21:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.720 21:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 bdev_null1 00:33:05.720 21:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:05.720 21:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.720 21:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 21:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:05.720 21:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.720 21:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 21:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.720 21:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.720 21:28:42 -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 21:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.720 21:28:42 -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:05.720 21:28:42 -- target/dif.sh@118 -- # 
create_json_sub_conf 0 1 00:33:05.720 21:28:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:05.720 21:28:42 -- nvmf/common.sh@520 -- # config=() 00:33:05.720 21:28:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.720 21:28:42 -- nvmf/common.sh@520 -- # local subsystem config 00:33:05.720 21:28:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:05.720 21:28:42 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.720 21:28:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:05.720 { 00:33:05.720 "params": { 00:33:05.720 "name": "Nvme$subsystem", 00:33:05.720 "trtype": "$TEST_TRANSPORT", 00:33:05.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.720 "adrfam": "ipv4", 00:33:05.720 "trsvcid": "$NVMF_PORT", 00:33:05.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.720 "hdgst": ${hdgst:-false}, 00:33:05.720 "ddgst": ${ddgst:-false} 00:33:05.720 }, 00:33:05.720 "method": "bdev_nvme_attach_controller" 00:33:05.720 } 00:33:05.720 EOF 00:33:05.720 )") 00:33:05.720 21:28:42 -- target/dif.sh@82 -- # gen_fio_conf 00:33:05.720 21:28:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:05.720 21:28:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.720 21:28:42 -- target/dif.sh@54 -- # local file 00:33:05.720 21:28:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:05.720 21:28:42 -- target/dif.sh@56 -- # cat 00:33:05.720 21:28:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.720 21:28:42 -- common/autotest_common.sh@1320 -- # shift 00:33:05.720 21:28:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:05.720 21:28:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.720 21:28:42 -- nvmf/common.sh@542 -- # cat 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.720 21:28:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:05.720 21:28:42 -- target/dif.sh@72 -- # (( file <= files )) 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:05.720 21:28:42 -- target/dif.sh@73 -- # cat 00:33:05.720 21:28:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:05.720 21:28:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:05.720 { 00:33:05.720 "params": { 00:33:05.720 "name": "Nvme$subsystem", 00:33:05.720 "trtype": "$TEST_TRANSPORT", 00:33:05.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.720 "adrfam": "ipv4", 00:33:05.720 "trsvcid": "$NVMF_PORT", 00:33:05.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.720 "hdgst": ${hdgst:-false}, 00:33:05.720 "ddgst": ${ddgst:-false} 00:33:05.720 }, 00:33:05.720 "method": "bdev_nvme_attach_controller" 00:33:05.720 } 00:33:05.720 EOF 00:33:05.720 )") 00:33:05.720 21:28:42 -- target/dif.sh@72 -- # (( file++ )) 00:33:05.720 21:28:42 -- target/dif.sh@72 -- # (( file <= files )) 00:33:05.720 21:28:42 -- nvmf/common.sh@542 -- # cat 00:33:05.720 21:28:42 -- nvmf/common.sh@544 -- # jq . 
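gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller entry per subsystem: each heredoc fragment is appended to a bash array with the ${hdgst:-false}/${ddgst:-false} defaults substituted, the fragments are joined with IFS=',', and the result is validated with jq before being handed to the fio plugin on /dev/fd/62 (the job file goes in on /dev/fd/61). A minimal sketch of that collect, join, and validate pattern, using illustrative field names rather than the script's own, is:

config=()
for sub in 0 1; do
    config+=("$(cat <<EOF
{ "name": "Nvme$sub", "subnqn": "nqn.2016-06.io.spdk:cnode$sub", "hdgst": ${hdgst:-false} }
EOF
    )")
done
IFS=,
printf '%s\n' "[${config[*]}]" | jq .   # fragments joined with commas, then pretty-printed/validated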
00:33:05.720 21:28:42 -- nvmf/common.sh@545 -- # IFS=, 00:33:05.720 21:28:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:05.720 "params": { 00:33:05.720 "name": "Nvme0", 00:33:05.720 "trtype": "tcp", 00:33:05.720 "traddr": "10.0.0.2", 00:33:05.720 "adrfam": "ipv4", 00:33:05.720 "trsvcid": "4420", 00:33:05.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.720 "hdgst": false, 00:33:05.720 "ddgst": false 00:33:05.720 }, 00:33:05.720 "method": "bdev_nvme_attach_controller" 00:33:05.720 },{ 00:33:05.720 "params": { 00:33:05.720 "name": "Nvme1", 00:33:05.720 "trtype": "tcp", 00:33:05.720 "traddr": "10.0.0.2", 00:33:05.720 "adrfam": "ipv4", 00:33:05.720 "trsvcid": "4420", 00:33:05.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:05.720 "hdgst": false, 00:33:05.720 "ddgst": false 00:33:05.720 }, 00:33:05.720 "method": "bdev_nvme_attach_controller" 00:33:05.720 }' 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:05.720 21:28:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:05.720 21:28:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:05.720 21:28:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:05.720 21:28:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:05.720 21:28:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:05.720 21:28:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.720 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:05.720 ... 00:33:05.720 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:05.720 ... 00:33:05.720 fio-3.35 00:33:05.720 Starting 4 threads 00:33:05.720 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.720 [2024-06-08 21:28:43.183882] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:05.720 [2024-06-08 21:28:43.183924] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:11.004 00:33:11.004 filename0: (groupid=0, jobs=1): err= 0: pid=2608522: Sat Jun 8 21:28:48 2024 00:33:11.004 read: IOPS=2509, BW=19.6MiB/s (20.6MB/s)(98.1MiB/5002msec) 00:33:11.004 slat (nsec): min=7786, max=88567, avg=9849.91, stdev=2494.35 00:33:11.004 clat (usec): min=1169, max=6059, avg=3159.60, stdev=557.40 00:33:11.004 lat (usec): min=1180, max=6083, avg=3169.45, stdev=557.62 00:33:11.004 clat percentiles (usec): 00:33:11.005 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2737], 00:33:11.005 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3261], 00:33:11.005 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 3818], 95.00th=[ 4146], 00:33:11.005 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5669], 00:33:11.005 | 99.99th=[ 6063] 00:33:11.005 bw ( KiB/s): min=19472, max=20544, per=29.16%, avg=20108.44, stdev=346.11, samples=9 00:33:11.005 iops : min= 2434, max= 2568, avg=2513.56, stdev=43.26, samples=9 00:33:11.005 lat (msec) : 2=1.40%, 4=91.34%, 10=7.26% 00:33:11.005 cpu : usr=96.26%, sys=3.30%, ctx=58, majf=0, minf=0 00:33:11.005 IO depths : 1=0.8%, 2=2.9%, 4=68.8%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:11.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 issued rwts: total=12551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:11.005 filename0: (groupid=0, jobs=1): err= 0: pid=2608523: Sat Jun 8 21:28:48 2024 00:33:11.005 read: IOPS=1990, BW=15.5MiB/s (16.3MB/s)(77.8MiB/5002msec) 00:33:11.005 slat (nsec): min=7776, max=62271, avg=8543.61, stdev=1971.00 00:33:11.005 clat (usec): min=2023, max=8373, avg=3997.33, stdev=694.45 00:33:11.005 lat (usec): min=2032, max=8405, avg=4005.87, stdev=694.45 00:33:11.005 clat percentiles (usec): 00:33:11.005 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3425], 00:33:11.005 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3916], 60.00th=[ 4080], 00:33:11.005 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5276], 00:33:11.005 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 6718], 99.95th=[ 8291], 00:33:11.005 | 99.99th=[ 8356] 00:33:11.005 bw ( KiB/s): min=15696, max=16480, per=23.03%, avg=15879.11, stdev=242.40, samples=9 00:33:11.005 iops : min= 1962, max= 2060, avg=1984.89, stdev=30.30, samples=9 00:33:11.005 lat (msec) : 4=55.67%, 10=44.33% 00:33:11.005 cpu : usr=97.48%, sys=2.26%, ctx=5, majf=0, minf=9 00:33:11.005 IO depths : 1=0.2%, 2=1.3%, 4=68.7%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:11.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 issued rwts: total=9955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:11.005 filename1: (groupid=0, jobs=1): err= 0: pid=2608524: Sat Jun 8 21:28:48 2024 00:33:11.005 read: IOPS=2124, BW=16.6MiB/s (17.4MB/s)(83.0MiB/5002msec) 00:33:11.005 slat (nsec): min=5334, max=84423, avg=5972.34, stdev=1770.16 00:33:11.005 clat (usec): min=1394, max=7946, avg=3750.05, stdev=644.70 00:33:11.005 lat (usec): min=1399, max=7952, avg=3756.03, stdev=644.67 00:33:11.005 clat percentiles (usec): 00:33:11.005 | 1.00th=[ 2474], 5.00th=[ 2802], 
10.00th=[ 2999], 20.00th=[ 3228], 00:33:11.005 | 30.00th=[ 3392], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3851], 00:33:11.005 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4948], 00:33:11.005 | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 6259], 99.95th=[ 6390], 00:33:11.005 | 99.99th=[ 7963] 00:33:11.005 bw ( KiB/s): min=16561, max=17312, per=24.61%, avg=16972.56, stdev=252.62, samples=9 00:33:11.005 iops : min= 2070, max= 2164, avg=2121.56, stdev=31.60, samples=9 00:33:11.005 lat (msec) : 2=0.08%, 4=69.83%, 10=30.10% 00:33:11.005 cpu : usr=96.82%, sys=2.92%, ctx=8, majf=0, minf=9 00:33:11.005 IO depths : 1=0.2%, 2=1.4%, 4=68.6%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:11.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 issued rwts: total=10625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:11.005 filename1: (groupid=0, jobs=1): err= 0: pid=2608525: Sat Jun 8 21:28:48 2024 00:33:11.005 read: IOPS=1995, BW=15.6MiB/s (16.3MB/s)(78.0MiB/5002msec) 00:33:11.005 slat (nsec): min=5341, max=61155, avg=5966.63, stdev=1731.91 00:33:11.005 clat (usec): min=1991, max=8996, avg=3992.53, stdev=708.44 00:33:11.005 lat (usec): min=1996, max=9027, avg=3998.50, stdev=708.52 00:33:11.005 clat percentiles (usec): 00:33:11.005 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3392], 00:33:11.005 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3916], 60.00th=[ 4080], 00:33:11.005 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5276], 00:33:11.005 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 7635], 99.95th=[ 8979], 00:33:11.005 | 99.99th=[ 8979] 00:33:11.005 bw ( KiB/s): min=15744, max=16144, per=23.11%, avg=15937.78, stdev=132.05, samples=9 00:33:11.005 iops : min= 1968, max= 2018, avg=1992.22, stdev=16.51, samples=9 00:33:11.005 lat (msec) : 2=0.02%, 4=55.65%, 10=44.33% 00:33:11.005 cpu : usr=96.68%, sys=3.08%, ctx=10, majf=0, minf=9 00:33:11.005 IO depths : 1=0.1%, 2=1.2%, 4=69.2%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:11.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.005 issued rwts: total=9981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:11.005 00:33:11.005 Run status group 0 (all jobs): 00:33:11.005 READ: bw=67.3MiB/s (70.6MB/s), 15.5MiB/s-19.6MiB/s (16.3MB/s-20.6MB/s), io=337MiB (353MB), run=5002-5002msec 00:33:11.005 21:28:48 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:11.005 21:28:48 -- target/dif.sh@43 -- # local sub 00:33:11.005 21:28:48 -- target/dif.sh@45 -- # for sub in "$@" 00:33:11.005 21:28:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:11.005 21:28:48 -- target/dif.sh@36 -- # local sub_id=0 00:33:11.005 21:28:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:11.005 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.005 21:28:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:11.005 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 21:28:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.005 21:28:48 -- target/dif.sh@45 -- # for sub in "$@" 00:33:11.005 21:28:48 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:11.005 21:28:48 -- target/dif.sh@36 -- # local sub_id=1 00:33:11.005 21:28:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:11.005 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.005 21:28:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:11.005 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.005 00:33:11.005 real 0m24.580s 00:33:11.005 user 5m16.212s 00:33:11.005 sys 0m4.139s 00:33:11.005 21:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 ************************************ 00:33:11.005 END TEST fio_dif_rand_params 00:33:11.005 ************************************ 00:33:11.005 21:28:48 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:11.005 21:28:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:11.005 21:28:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 ************************************ 00:33:11.005 START TEST fio_dif_digest 00:33:11.005 ************************************ 00:33:11.005 21:28:48 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:33:11.005 21:28:48 -- target/dif.sh@123 -- # local NULL_DIF 00:33:11.005 21:28:48 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:11.005 21:28:48 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:11.005 21:28:48 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:11.005 21:28:48 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:11.005 21:28:48 -- target/dif.sh@127 -- # numjobs=3 00:33:11.005 21:28:48 -- target/dif.sh@127 -- # iodepth=3 00:33:11.005 21:28:48 -- target/dif.sh@127 -- # runtime=10 00:33:11.005 21:28:48 -- target/dif.sh@128 -- # hdgst=true 00:33:11.005 21:28:48 -- target/dif.sh@128 -- # ddgst=true 00:33:11.005 21:28:48 -- target/dif.sh@130 -- # create_subsystems 0 00:33:11.005 21:28:48 -- target/dif.sh@28 -- # local sub 00:33:11.005 21:28:48 -- target/dif.sh@30 -- # for sub in "$@" 00:33:11.005 21:28:48 -- target/dif.sh@31 -- # create_subsystem 0 00:33:11.005 21:28:48 -- target/dif.sh@18 -- # local sub_id=0 00:33:11.005 21:28:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:11.005 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 bdev_null0 00:33:11.005 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.005 21:28:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:11.005 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.005 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.005 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.006 21:28:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:11.006 
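For the digest test the subsystem is rebuilt with a protected namespace: bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 creates a 64 MiB null bdev with 512-byte blocks, a 16-byte per-block metadata region, and end-to-end protection (DIF) type 3. A hand-run sketch of the setup being traced here, mirroring the rpc_cmd calls (rpc.py path assumed; the listener add appears just below in the trace), would be roughly:

./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420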
21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.006 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.006 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.006 21:28:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.006 21:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.006 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:11.006 [2024-06-08 21:28:48.580985] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.006 21:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.006 21:28:48 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:11.006 21:28:48 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:11.006 21:28:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:11.006 21:28:48 -- nvmf/common.sh@520 -- # config=() 00:33:11.006 21:28:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.006 21:28:48 -- nvmf/common.sh@520 -- # local subsystem config 00:33:11.006 21:28:48 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.006 21:28:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:11.006 21:28:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:11.006 21:28:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:11.006 { 00:33:11.006 "params": { 00:33:11.006 "name": "Nvme$subsystem", 00:33:11.006 "trtype": "$TEST_TRANSPORT", 00:33:11.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.006 "adrfam": "ipv4", 00:33:11.006 "trsvcid": "$NVMF_PORT", 00:33:11.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.006 "hdgst": ${hdgst:-false}, 00:33:11.006 "ddgst": ${ddgst:-false} 00:33:11.006 }, 00:33:11.006 "method": "bdev_nvme_attach_controller" 00:33:11.006 } 00:33:11.006 EOF 00:33:11.006 )") 00:33:11.006 21:28:48 -- target/dif.sh@82 -- # gen_fio_conf 00:33:11.006 21:28:48 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:11.006 21:28:48 -- target/dif.sh@54 -- # local file 00:33:11.006 21:28:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:11.006 21:28:48 -- target/dif.sh@56 -- # cat 00:33:11.006 21:28:48 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.006 21:28:48 -- common/autotest_common.sh@1320 -- # shift 00:33:11.006 21:28:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:11.006 21:28:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.006 21:28:48 -- nvmf/common.sh@542 -- # cat 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:11.006 21:28:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:11.006 21:28:48 -- target/dif.sh@72 -- # (( file <= files )) 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:11.006 21:28:48 -- nvmf/common.sh@544 -- # jq . 
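The JSON printed just below differs from the earlier runs only in "hdgst": true and "ddgst": true, which enable CRC32C header and data digests on the NVMe/TCP connection opened by fio's spdk_bdev engine. A hand-written job roughly matching the generated one (the bdev name Nvme0n1, the digest.fio and target.json file names, and the use of a file instead of /dev/fd/62 are assumptions for illustration, not the harness's own files) could look like:

cat > digest.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
[filename0]
filename=Nvme0n1
numjobs=3
EOF
# target.json is assumed to hold the bdev_nvme_attach_controller config shown below
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./target.json digest.fio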
00:33:11.006 21:28:48 -- nvmf/common.sh@545 -- # IFS=, 00:33:11.006 21:28:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:11.006 "params": { 00:33:11.006 "name": "Nvme0", 00:33:11.006 "trtype": "tcp", 00:33:11.006 "traddr": "10.0.0.2", 00:33:11.006 "adrfam": "ipv4", 00:33:11.006 "trsvcid": "4420", 00:33:11.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.006 "hdgst": true, 00:33:11.006 "ddgst": true 00:33:11.006 }, 00:33:11.006 "method": "bdev_nvme_attach_controller" 00:33:11.006 }' 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:11.006 21:28:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:11.006 21:28:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:11.006 21:28:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:11.006 21:28:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:11.006 21:28:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:11.006 21:28:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.006 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:11.006 ... 00:33:11.006 fio-3.35 00:33:11.006 Starting 3 threads 00:33:11.006 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.266 [2024-06-08 21:28:49.334743] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:33:11.266 [2024-06-08 21:28:49.334784] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:23.502 00:33:23.502 filename0: (groupid=0, jobs=1): err= 0: pid=2609811: Sat Jun 8 21:28:59 2024 00:33:23.502 read: IOPS=110, BW=13.8MiB/s (14.4MB/s)(138MiB/10014msec) 00:33:23.502 slat (nsec): min=5606, max=30879, avg=7879.41, stdev=1915.84 00:33:23.502 clat (usec): min=8744, max=97821, avg=27244.27, stdev=20447.31 00:33:23.502 lat (usec): min=8750, max=97828, avg=27252.15, stdev=20447.17 00:33:23.502 clat percentiles (usec): 00:33:23.502 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11076], 20.00th=[11994], 00:33:23.502 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14353], 60.00th=[15401], 00:33:23.503 | 70.00th=[52691], 80.00th=[53740], 90.00th=[54789], 95.00th=[55837], 00:33:23.503 | 99.00th=[57934], 99.50th=[94897], 99.90th=[95945], 99.95th=[98042], 00:33:23.503 | 99.99th=[98042] 00:33:23.503 bw ( KiB/s): min= 9984, max=19968, per=33.21%, avg=14065.95, stdev=3156.35, samples=20 00:33:23.503 iops : min= 78, max= 156, avg=109.85, stdev=24.68, samples=20 00:33:23.503 lat (msec) : 10=1.91%, 20=64.52%, 50=0.09%, 100=33.48% 00:33:23.503 cpu : usr=96.92%, sys=2.80%, ctx=17, majf=0, minf=95 00:33:23.503 IO depths : 1=6.5%, 2=93.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.503 issued rwts: total=1102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.503 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:23.503 filename0: (groupid=0, jobs=1): err= 0: pid=2609812: Sat Jun 8 21:28:59 2024 00:33:23.503 read: IOPS=114, BW=14.3MiB/s (15.0MB/s)(144MiB/10045msec) 00:33:23.503 slat (nsec): min=5611, max=30974, avg=8111.20, stdev=2037.40 00:33:23.503 clat (usec): min=8308, max=96866, avg=26152.43, stdev=19469.34 00:33:23.503 lat (usec): min=8314, max=96873, avg=26160.54, stdev=19469.30 00:33:23.503 clat percentiles (usec): 00:33:23.503 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 00:33:23.503 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14484], 60.00th=[15401], 00:33:23.503 | 70.00th=[51119], 80.00th=[53740], 90.00th=[54789], 95.00th=[55313], 00:33:23.503 | 99.00th=[56886], 99.50th=[92799], 99.90th=[94897], 99.95th=[96994], 00:33:23.503 | 99.99th=[96994] 00:33:23.503 bw ( KiB/s): min= 8448, max=24576, per=34.70%, avg=14694.40, stdev=3655.85, samples=20 00:33:23.503 iops : min= 66, max= 192, avg=114.80, stdev=28.56, samples=20 00:33:23.503 lat (msec) : 10=2.26%, 20=66.52%, 50=0.35%, 100=30.87% 00:33:23.503 cpu : usr=97.10%, sys=2.63%, ctx=14, majf=0, minf=209 00:33:23.503 IO depths : 1=10.6%, 2=89.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.503 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.503 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:23.503 filename0: (groupid=0, jobs=1): err= 0: pid=2609813: Sat Jun 8 21:28:59 2024 00:33:23.503 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(134MiB/10047msec) 00:33:23.503 slat (nsec): min=5730, max=31548, avg=8526.20, stdev=1639.21 00:33:23.503 clat (msec): min=8, max=134, avg=28.10, stdev=22.04 00:33:23.503 lat (msec): min=8, max=134, avg=28.11, stdev=22.04 00:33:23.503 clat percentiles (msec): 00:33:23.503 
| 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:33:23.503 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:33:23.503 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 56], 95.00th=[ 57], 00:33:23.503 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 99], 99.95th=[ 136], 00:33:23.503 | 99.99th=[ 136] 00:33:23.503 bw ( KiB/s): min= 8192, max=19200, per=32.31%, avg=13684.75, stdev=3129.31, samples=20 00:33:23.503 iops : min= 64, max= 150, avg=106.90, stdev=24.44, samples=20 00:33:23.503 lat (msec) : 10=5.22%, 20=60.54%, 50=0.37%, 100=33.77%, 250=0.09% 00:33:23.503 cpu : usr=97.11%, sys=2.61%, ctx=14, majf=0, minf=204 00:33:23.503 IO depths : 1=5.2%, 2=94.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.503 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.503 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:23.503 00:33:23.503 Run status group 0 (all jobs): 00:33:23.503 READ: bw=41.4MiB/s (43.4MB/s), 13.3MiB/s-14.3MiB/s (14.0MB/s-15.0MB/s), io=416MiB (436MB), run=10014-10047msec 00:33:23.503 21:28:59 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:23.503 21:28:59 -- target/dif.sh@43 -- # local sub 00:33:23.503 21:28:59 -- target/dif.sh@45 -- # for sub in "$@" 00:33:23.503 21:28:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.503 21:28:59 -- target/dif.sh@36 -- # local sub_id=0 00:33:23.503 21:28:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.503 21:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.503 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:33:23.503 21:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.503 21:28:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.503 21:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.503 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:33:23.503 21:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.503 00:33:23.503 real 0m11.119s 00:33:23.503 user 0m42.376s 00:33:23.503 sys 0m1.137s 00:33:23.503 21:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:23.503 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:33:23.503 ************************************ 00:33:23.503 END TEST fio_dif_digest 00:33:23.503 ************************************ 00:33:23.503 21:28:59 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:23.503 21:28:59 -- target/dif.sh@147 -- # nvmftestfini 00:33:23.503 21:28:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:23.503 21:28:59 -- nvmf/common.sh@116 -- # sync 00:33:23.503 21:28:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:23.503 21:28:59 -- nvmf/common.sh@119 -- # set +e 00:33:23.503 21:28:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:23.503 21:28:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:23.503 rmmod nvme_tcp 00:33:23.503 rmmod nvme_fabrics 00:33:23.503 rmmod nvme_keyring 00:33:23.503 21:28:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:23.503 21:28:59 -- nvmf/common.sh@123 -- # set -e 00:33:23.503 21:28:59 -- nvmf/common.sh@124 -- # return 0 00:33:23.503 21:28:59 -- nvmf/common.sh@477 -- # '[' -n 2599312 ']' 00:33:23.503 21:28:59 -- nvmf/common.sh@478 -- # killprocess 2599312 00:33:23.503 21:28:59 -- common/autotest_common.sh@926 -- # '[' -z 2599312 ']' 00:33:23.503 
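nvmftestfini above unwinds the initiator side and then stops the target: it syncs, removes the kernel nvme-tcp stack (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output as the dependency chain is unloaded), and kills the long-running nvmf_tgt app whose pid was recorded at startup. Done by hand, a rough equivalent (pid taken from the log above) is:

sync
modprobe -v -r nvme-tcp        # verbose removal also drops nvme-fabrics/nvme-keyring when unused
modprobe -v -r nvme-fabrics
kill 2599312                   # nvmf_tgt (reactor_0); killprocess() then waits for it to exit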
21:28:59 -- common/autotest_common.sh@930 -- # kill -0 2599312 00:33:23.503 21:28:59 -- common/autotest_common.sh@931 -- # uname 00:33:23.503 21:28:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:23.503 21:28:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2599312 00:33:23.503 21:28:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:23.503 21:28:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:23.503 21:28:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2599312' 00:33:23.503 killing process with pid 2599312 00:33:23.503 21:28:59 -- common/autotest_common.sh@945 -- # kill 2599312 00:33:23.503 21:28:59 -- common/autotest_common.sh@950 -- # wait 2599312 00:33:23.503 21:28:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:23.503 21:28:59 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:25.444 Waiting for block devices as requested 00:33:25.444 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:25.444 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:25.444 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:25.444 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:25.444 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:25.703 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:25.703 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:25.703 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:25.963 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:25.963 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:26.224 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:26.224 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:26.224 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:26.224 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:26.485 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:26.485 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:26.485 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:26.745 21:29:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:26.745 21:29:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:26.745 21:29:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.745 21:29:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:26.745 21:29:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.745 21:29:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:26.745 21:29:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.290 21:29:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:29.290 00:33:29.290 real 1m17.405s 00:33:29.290 user 8m3.446s 00:33:29.290 sys 0m19.100s 00:33:29.290 21:29:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:29.290 21:29:06 -- common/autotest_common.sh@10 -- # set +x 00:33:29.290 ************************************ 00:33:29.290 END TEST nvmf_dif 00:33:29.290 ************************************ 00:33:29.290 21:29:06 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:29.290 21:29:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:29.290 21:29:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:29.290 21:29:06 -- common/autotest_common.sh@10 -- # set +x 00:33:29.290 ************************************ 00:33:29.290 START TEST nvmf_abort_qd_sizes 00:33:29.290 ************************************ 00:33:29.290 21:29:06 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:29.290 * Looking for test storage... 00:33:29.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.290 21:29:06 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.290 21:29:06 -- nvmf/common.sh@7 -- # uname -s 00:33:29.290 21:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.290 21:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.290 21:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.290 21:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.290 21:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.290 21:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.290 21:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.290 21:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.290 21:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.290 21:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.290 21:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:29.290 21:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:29.290 21:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.290 21:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.290 21:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.290 21:29:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.290 21:29:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.290 21:29:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.290 21:29:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.290 21:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.290 21:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.290 21:29:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.290 21:29:06 -- paths/export.sh@5 -- # export PATH 00:33:29.290 21:29:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.290 21:29:06 -- nvmf/common.sh@46 -- # : 0 00:33:29.290 21:29:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:29.290 21:29:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:29.290 21:29:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:29.290 21:29:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.290 21:29:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.290 21:29:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:29.290 21:29:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:29.290 21:29:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:29.290 21:29:06 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:33:29.290 21:29:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:29.290 21:29:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.290 21:29:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:29.290 21:29:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:29.290 21:29:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:29.290 21:29:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.290 21:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:29.290 21:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.290 21:29:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:29.290 21:29:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:29.290 21:29:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:29.290 21:29:06 -- common/autotest_common.sh@10 -- # set +x 00:33:35.876 21:29:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:35.876 21:29:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:35.876 21:29:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:35.876 21:29:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:35.876 21:29:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:35.876 21:29:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:35.876 21:29:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:35.876 21:29:13 -- nvmf/common.sh@294 -- # net_devs=() 00:33:35.876 21:29:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:35.876 21:29:13 -- nvmf/common.sh@295 -- # e810=() 00:33:35.876 21:29:13 -- nvmf/common.sh@295 -- # local -ga e810 00:33:35.876 21:29:13 -- nvmf/common.sh@296 -- # x722=() 00:33:35.876 21:29:13 -- nvmf/common.sh@296 -- # local -ga x722 00:33:35.876 21:29:13 -- nvmf/common.sh@297 -- # mlx=() 00:33:35.876 21:29:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:35.876 21:29:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.876 21:29:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:35.876 21:29:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:35.876 21:29:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:35.876 21:29:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:35.876 21:29:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:35.876 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:35.876 21:29:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:35.876 21:29:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:35.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:35.876 21:29:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:35.876 21:29:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:35.876 21:29:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.876 21:29:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:35.876 21:29:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.876 21:29:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:35.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:35.876 21:29:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.876 21:29:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:35.876 21:29:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.876 21:29:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:35.876 21:29:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.876 21:29:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:35.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:35.876 21:29:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.876 21:29:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:35.876 21:29:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:35.876 21:29:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:35.876 21:29:13 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:35.876 21:29:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:35.876 21:29:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.876 21:29:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.876 21:29:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.876 21:29:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:35.876 21:29:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.876 21:29:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.876 21:29:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:35.876 21:29:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.876 21:29:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.876 21:29:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:35.876 21:29:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:35.876 21:29:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.876 21:29:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.876 21:29:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.876 21:29:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.876 21:29:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:35.877 21:29:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.877 21:29:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.877 21:29:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.137 21:29:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:36.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:33:36.137 00:33:36.137 --- 10.0.0.2 ping statistics --- 00:33:36.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.137 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:33:36.137 21:29:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
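[Editor's note] The nvmf_tcp_init trace above boils down to a short piece of network plumbing: move the target-side port into its own namespace, address both ends, open the NVMe/TCP port, and verify reachability. The sketch below is a hand-written summary of that sequence, not the harness code itself; the interface names (cvl_0_0, cvl_0_1), the namespace name cvl_0_0_ns_spdk and the 10.0.0.x addresses are the ones printed in this run and would differ on other hardware. All commands must run as root.

# Target port goes into its own namespace; initiator port stays in the host namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP traffic on the default discovery/data port, then ping both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the two ports of one physical NIC back to back through a namespace is what lets a single machine act as both initiator and target over real wire, which is the point of the "phy" flavour of this job.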
00:33:36.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:33:36.137 00:33:36.137 --- 10.0.0.1 ping statistics --- 00:33:36.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.137 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:33:36.137 21:29:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.137 21:29:14 -- nvmf/common.sh@410 -- # return 0 00:33:36.137 21:29:14 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:36.137 21:29:14 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:39.435 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:39.435 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:39.695 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:39.695 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:39.955 21:29:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.955 21:29:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:39.955 21:29:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:39.955 21:29:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.955 21:29:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:39.955 21:29:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:39.955 21:29:17 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:33:39.955 21:29:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:39.955 21:29:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:39.955 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:33:39.955 21:29:17 -- nvmf/common.sh@469 -- # nvmfpid=2619322 00:33:39.955 21:29:17 -- nvmf/common.sh@470 -- # waitforlisten 2619322 00:33:39.955 21:29:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:39.955 21:29:17 -- common/autotest_common.sh@819 -- # '[' -z 2619322 ']' 00:33:39.955 21:29:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.955 21:29:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:39.956 21:29:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.956 21:29:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:39.956 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:33:39.956 [2024-06-08 21:29:17.974283] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:33:39.956 [2024-06-08 21:29:17.974338] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.956 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.956 [2024-06-08 21:29:18.043242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.216 [2024-06-08 21:29:18.117153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:40.216 [2024-06-08 21:29:18.117288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.216 [2024-06-08 21:29:18.117298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.216 [2024-06-08 21:29:18.117306] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.216 [2024-06-08 21:29:18.117435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.216 [2024-06-08 21:29:18.117559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.216 [2024-06-08 21:29:18.117694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:40.216 [2024-06-08 21:29:18.117695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.788 21:29:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:40.788 21:29:18 -- common/autotest_common.sh@852 -- # return 0 00:33:40.788 21:29:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:40.788 21:29:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:40.788 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:33:40.788 21:29:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:33:40.788 21:29:18 -- scripts/common.sh@311 -- # local bdf bdfs 00:33:40.788 21:29:18 -- scripts/common.sh@312 -- # local nvmes 00:33:40.788 21:29:18 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:33:40.788 21:29:18 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:40.788 21:29:18 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:33:40.788 21:29:18 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:40.788 21:29:18 -- scripts/common.sh@322 -- # uname -s 00:33:40.788 21:29:18 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:33:40.788 21:29:18 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:33:40.788 21:29:18 -- scripts/common.sh@327 -- # (( 1 )) 00:33:40.788 21:29:18 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:33:40.788 21:29:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:40.788 21:29:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:40.788 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:33:40.788 ************************************ 00:33:40.788 START TEST 
spdk_target_abort 00:33:40.788 ************************************ 00:33:40.788 21:29:18 -- common/autotest_common.sh@1104 -- # spdk_target 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:40.788 21:29:18 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:40.788 21:29:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.788 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:33:41.049 spdk_targetn1 00:33:41.049 21:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.049 21:29:19 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:41.049 21:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.049 21:29:19 -- common/autotest_common.sh@10 -- # set +x 00:33:41.049 [2024-06-08 21:29:19.113243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.049 21:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.049 21:29:19 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:33:41.049 21:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.049 21:29:19 -- common/autotest_common.sh@10 -- # set +x 00:33:41.049 21:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.049 21:29:19 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:33:41.049 21:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.049 21:29:19 -- common/autotest_common.sh@10 -- # set +x 00:33:41.049 21:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.049 21:29:19 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:33:41.049 21:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.049 21:29:19 -- common/autotest_common.sh@10 -- # set +x 00:33:41.310 [2024-06-08 21:29:19.141517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.310 21:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:41.310 21:29:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:41.311 21:29:19 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:41.311 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.572 [2024-06-08 21:29:19.442806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:336 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.442828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.450914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:528 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.450930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0045 p:1 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.474943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1168 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.474960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0095 p:1 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.497924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1816 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.497940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e4 p:1 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.535866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2792 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.535882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.553577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3344 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.553592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a5 p:0 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.569476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3752 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:41.572 [2024-06-08 21:29:19.569491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:33:41.572 [2024-06-08 21:29:19.571994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3848 len:8 PRP1 0x2000078c2000 PRP2 0x0 
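[Editor's note] The rpc_cmd calls traced above can be reproduced by hand. The following is a minimal sketch under the same assumptions as this run: the nvmf_tgt started earlier is still listening on the default /var/tmp/spdk.sock, the local NVMe drive sits at PCI address 0000:65:00.0, and the commands are issued from the SPDK source tree. The queue depths 4, 24 and 64 are the qds array the test iterates over.

RPC=./scripts/rpc.py
# Expose the local NVMe drive through an NVMe-oF TCP subsystem
$RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # creates bdev spdk_targetn1
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
# Drive the subsystem with the abort example at each queue depth the test exercises
for qd in 4 24 64; do
    ./build/examples/abort -q $qd -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
done

The ABORTED - BY REQUEST completions interleaved with the trace are expected: the abort example deliberately submits aborts against in-flight I/O, and the per-pass summary lines count how many were accepted versus failed to submit.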
00:33:41.572 [2024-06-08 21:29:19.572008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e3 p:0 m:0 dnr:0 00:33:44.875 Initializing NVMe Controllers 00:33:44.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:44.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:44.875 Initialization complete. Launching workers. 00:33:44.875 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 6525, failed: 8 00:33:44.875 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2544, failed to submit 3989 00:33:44.875 success 568, unsuccess 1976, failed 0 00:33:44.875 21:29:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:44.875 21:29:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:44.875 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.875 [2024-06-08 21:29:22.703467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2000 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:33:44.875 [2024-06-08 21:29:22.703510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00fb p:1 m:0 dnr:0 00:33:44.875 [2024-06-08 21:29:22.743546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:2904 len:8 PRP1 0x200007c58000 PRP2 0x0 00:33:44.875 [2024-06-08 21:29:22.743571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:44.875 [2024-06-08 21:29:22.767542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3552 len:8 PRP1 0x200007c58000 PRP2 0x0 00:33:44.875 [2024-06-08 21:29:22.767566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00c0 p:0 m:0 dnr:0 00:33:44.875 [2024-06-08 21:29:22.775543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3648 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:33:44.875 [2024-06-08 21:29:22.775565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:33:46.261 [2024-06-08 21:29:24.029550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:32128 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:33:46.261 [2024-06-08 21:29:24.029593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:33:48.176 Initializing NVMe Controllers 00:33:48.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:48.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:48.176 Initialization complete. Launching workers. 
00:33:48.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8514, failed: 5 00:33:48.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1205, failed to submit 7314 00:33:48.176 success 350, unsuccess 855, failed 0 00:33:48.176 21:29:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:48.176 21:29:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:48.177 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.177 [2024-06-08 21:29:25.887040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:170 nsid:1 lba:1896 len:8 PRP1 0x2000078f4000 PRP2 0x0 00:33:48.177 [2024-06-08 21:29:25.887068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:170 cdw0:0 sqhd:007d p:1 m:0 dnr:0 00:33:50.767 [2024-06-08 21:29:28.216842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:144 nsid:1 lba:262480 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:50.767 [2024-06-08 21:29:28.216867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:144 cdw0:0 sqhd:00b0 p:0 m:0 dnr:0 00:33:51.034 Initializing NVMe Controllers 00:33:51.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:51.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:51.034 Initialization complete. Launching workers. 00:33:51.034 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 42003, failed: 2 00:33:51.034 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2567, failed to submit 39438 00:33:51.034 success 667, unsuccess 1900, failed 0 00:33:51.034 21:29:28 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:33:51.034 21:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.034 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:33:51.034 21:29:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.034 21:29:28 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:51.034 21:29:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.034 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:33:52.950 21:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.950 21:29:30 -- target/abort_qd_sizes.sh@62 -- # killprocess 2619322 00:33:52.950 21:29:30 -- common/autotest_common.sh@926 -- # '[' -z 2619322 ']' 00:33:52.950 21:29:30 -- common/autotest_common.sh@930 -- # kill -0 2619322 00:33:52.950 21:29:30 -- common/autotest_common.sh@931 -- # uname 00:33:52.950 21:29:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:52.950 21:29:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2619322 00:33:52.950 21:29:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:52.950 21:29:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:52.950 21:29:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2619322' 00:33:52.950 killing process with pid 2619322 00:33:52.950 21:29:30 -- common/autotest_common.sh@945 -- # kill 2619322 00:33:52.950 21:29:30 -- common/autotest_common.sh@950 -- # wait 2619322 
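[Editor's note] After the three queue-depth passes, the trace above tears the target back down over the same RPC socket and then kills the nvmf_tgt process. A minimal sketch of that cleanup, with the PID value a placeholder for whatever nvmfappstart recorded (2619322 in this run):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target
./scripts/rpc.py bdev_nvme_detach_controller spdk_target
# killprocess(): confirm the target is still alive, then terminate and reap it
NVMF_PID=2619322                           # placeholder: the pid printed by nvmfappstart above
kill -0 "$NVMF_PID" && kill "$NVMF_PID"
wait "$NVMF_PID" 2>/dev/null || true       # wait only reaps it when launched from this shell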
00:33:52.950 00:33:52.950 real 0m12.123s 00:33:52.950 user 0m49.010s 00:33:52.950 sys 0m1.936s 00:33:52.950 21:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.950 21:29:30 -- common/autotest_common.sh@10 -- # set +x 00:33:52.950 ************************************ 00:33:52.950 END TEST spdk_target_abort 00:33:52.950 ************************************ 00:33:52.950 21:29:30 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:33:52.950 21:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:52.950 21:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:52.950 21:29:30 -- common/autotest_common.sh@10 -- # set +x 00:33:52.950 ************************************ 00:33:52.950 START TEST kernel_target_abort 00:33:52.950 ************************************ 00:33:52.950 21:29:30 -- common/autotest_common.sh@1104 -- # kernel_target 00:33:52.950 21:29:30 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:33:52.950 21:29:30 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:33:52.950 21:29:30 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:33:52.950 21:29:30 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:33:52.950 21:29:30 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:33:52.950 21:29:30 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:52.950 21:29:30 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:52.950 21:29:30 -- nvmf/common.sh@627 -- # local block nvme 00:33:52.950 21:29:30 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:33:52.950 21:29:30 -- nvmf/common.sh@630 -- # modprobe nvmet 00:33:52.950 21:29:31 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:52.950 21:29:31 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:56.256 Waiting for block devices as requested 00:33:56.256 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:56.516 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:56.516 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:56.516 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:56.778 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:56.778 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:56.778 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:57.039 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:57.039 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:57.300 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:57.300 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:57.300 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:57.300 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:57.561 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:57.561 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:57.561 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:57.822 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:58.083 21:29:35 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:33:58.083 21:29:35 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:58.083 21:29:35 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:33:58.083 21:29:35 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:33:58.083 21:29:35 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:58.083 No valid GPT data, bailing 00:33:58.083 21:29:35 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:58.083 21:29:35 -- scripts/common.sh@393 -- # pt= 00:33:58.083 21:29:35 -- scripts/common.sh@394 -- # return 1 00:33:58.083 21:29:35 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:33:58.083 21:29:35 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:33:58.083 21:29:35 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:58.083 21:29:35 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:58.083 21:29:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:58.083 21:29:36 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:33:58.083 21:29:36 -- nvmf/common.sh@654 -- # echo 1 00:33:58.083 21:29:36 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:33:58.083 21:29:36 -- nvmf/common.sh@656 -- # echo 1 00:33:58.083 21:29:36 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:33:58.083 21:29:36 -- nvmf/common.sh@663 -- # echo tcp 00:33:58.083 21:29:36 -- nvmf/common.sh@664 -- # echo 4420 00:33:58.083 21:29:36 -- nvmf/common.sh@665 -- # echo ipv4 00:33:58.083 21:29:36 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:58.083 21:29:36 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:58.083 00:33:58.083 Discovery Log Number of Records 2, Generation counter 2 00:33:58.083 =====Discovery Log Entry 0====== 00:33:58.083 trtype: tcp 00:33:58.083 adrfam: ipv4 00:33:58.083 subtype: current discovery subsystem 00:33:58.083 treq: not specified, sq flow control disable supported 00:33:58.083 portid: 1 00:33:58.083 trsvcid: 4420 00:33:58.083 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:58.083 traddr: 10.0.0.1 00:33:58.083 eflags: none 00:33:58.083 sectype: none 00:33:58.083 =====Discovery Log Entry 1====== 00:33:58.083 trtype: tcp 00:33:58.083 adrfam: ipv4 00:33:58.083 subtype: nvme subsystem 00:33:58.083 treq: not specified, sq flow control disable supported 00:33:58.083 portid: 1 00:33:58.083 trsvcid: 4420 00:33:58.083 subnqn: kernel_target 00:33:58.083 traddr: 10.0.0.1 00:33:58.083 eflags: none 00:33:58.083 sectype: none 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.083 21:29:36 
-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:58.083 21:29:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:58.083 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.385 Initializing NVMe Controllers 00:34:01.385 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:01.385 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:01.385 Initialization complete. Launching workers. 00:34:01.385 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 41390, failed: 0 00:34:01.385 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 41390, failed to submit 0 00:34:01.385 success 0, unsuccess 41390, failed 0 00:34:01.385 21:29:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:01.385 21:29:39 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:01.385 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.691 Initializing NVMe Controllers 00:34:04.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:04.691 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:04.691 Initialization complete. Launching workers. 00:34:04.691 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 81566, failed: 0 00:34:04.691 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20538, failed to submit 61028 00:34:04.691 success 0, unsuccess 20538, failed 0 00:34:04.691 21:29:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:04.691 21:29:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:34:04.691 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.991 Initializing NVMe Controllers 00:34:07.991 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:34:07.991 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:34:07.991 Initialization complete. Launching workers. 
00:34:07.991 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 79291, failed: 0 00:34:07.991 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19798, failed to submit 59493 00:34:07.991 success 0, unsuccess 19798, failed 0 00:34:07.991 21:29:45 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:34:07.991 21:29:45 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:34:07.991 21:29:45 -- nvmf/common.sh@677 -- # echo 0 00:34:07.991 21:29:45 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:34:07.991 21:29:45 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:07.991 21:29:45 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:07.991 21:29:45 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:34:07.991 21:29:45 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:34:07.991 21:29:45 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:34:07.991 00:34:07.991 real 0m14.456s 00:34:07.991 user 0m5.921s 00:34:07.991 sys 0m4.096s 00:34:07.991 21:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:07.991 21:29:45 -- common/autotest_common.sh@10 -- # set +x 00:34:07.991 ************************************ 00:34:07.991 END TEST kernel_target_abort 00:34:07.991 ************************************ 00:34:07.991 21:29:45 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:34:07.991 21:29:45 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:34:07.991 21:29:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:07.991 21:29:45 -- nvmf/common.sh@116 -- # sync 00:34:07.991 21:29:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:07.991 21:29:45 -- nvmf/common.sh@119 -- # set +e 00:34:07.991 21:29:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:07.991 21:29:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:07.991 rmmod nvme_tcp 00:34:07.991 rmmod nvme_fabrics 00:34:07.991 rmmod nvme_keyring 00:34:07.991 21:29:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:07.991 21:29:45 -- nvmf/common.sh@123 -- # set -e 00:34:07.991 21:29:45 -- nvmf/common.sh@124 -- # return 0 00:34:07.991 21:29:45 -- nvmf/common.sh@477 -- # '[' -n 2619322 ']' 00:34:07.991 21:29:45 -- nvmf/common.sh@478 -- # killprocess 2619322 00:34:07.991 21:29:45 -- common/autotest_common.sh@926 -- # '[' -z 2619322 ']' 00:34:07.991 21:29:45 -- common/autotest_common.sh@930 -- # kill -0 2619322 00:34:07.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2619322) - No such process 00:34:07.991 21:29:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2619322 is not found' 00:34:07.991 Process with pid 2619322 is not found 00:34:07.991 21:29:45 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:07.991 21:29:45 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:10.539 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:34:10.539 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:10.539 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:10.539 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:10.800 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:11.061 21:29:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:11.061 21:29:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:11.061 21:29:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.061 21:29:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:11.061 21:29:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.061 21:29:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:11.061 21:29:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.609 21:29:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:13.609 00:34:13.609 real 0m44.237s 00:34:13.609 user 0m59.826s 00:34:13.609 sys 0m16.247s 00:34:13.609 21:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.609 21:29:51 -- common/autotest_common.sh@10 -- # set +x 00:34:13.609 ************************************ 00:34:13.609 END TEST nvmf_abort_qd_sizes 00:34:13.609 ************************************ 00:34:13.609 21:29:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:13.609 21:29:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:13.609 21:29:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:13.609 21:29:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:13.609 21:29:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:13.610 21:29:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:13.610 21:29:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:13.610 21:29:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:13.610 21:29:51 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:34:13.610 21:29:51 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:34:13.610 21:29:51 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:34:13.610 21:29:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:13.610 21:29:51 -- common/autotest_common.sh@10 -- # set +x 00:34:13.610 21:29:51 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:34:13.610 21:29:51 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:34:13.610 21:29:51 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:34:13.610 21:29:51 -- common/autotest_common.sh@10 -- # set +x 00:34:21.757 INFO: APP EXITING 00:34:21.757 INFO: killing all VMs 00:34:21.757 INFO: killing vhost app 00:34:21.757 WARN: no vhost pid file found 00:34:21.757 INFO: EXIT DONE 00:34:24.308 0000:80:01.6 (8086 0b00): Already using the ioatdma 
driver 00:34:24.308 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:24.308 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:24.308 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:27.614 Cleaning 00:34:27.614 Removing: /var/run/dpdk/spdk0/config 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:27.875 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:27.875 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:27.875 Removing: /var/run/dpdk/spdk1/config 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:27.875 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:27.875 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:27.875 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:27.875 Removing: /var/run/dpdk/spdk2/config 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:27.875 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:27.875 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:27.875 Removing: /var/run/dpdk/spdk3/config 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 
00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:27.875 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:27.875 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:27.875 Removing: /var/run/dpdk/spdk4/config 00:34:27.875 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:27.875 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:27.875 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:27.875 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:27.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:27.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:27.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:27.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:27.876 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:27.876 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:27.876 Removing: /dev/shm/bdev_svc_trace.1 00:34:27.876 Removing: /dev/shm/nvmf_trace.0 00:34:27.876 Removing: /dev/shm/spdk_tgt_trace.pid2158624 00:34:27.876 Removing: /var/run/dpdk/spdk0 00:34:27.876 Removing: /var/run/dpdk/spdk1 00:34:27.876 Removing: /var/run/dpdk/spdk2 00:34:27.876 Removing: /var/run/dpdk/spdk3 00:34:27.876 Removing: /var/run/dpdk/spdk4 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2157071 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2158624 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2159227 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2160366 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2161005 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2161273 00:34:27.876 Removing: /var/run/dpdk/spdk_pid2161605 00:34:28.137 Removing: /var/run/dpdk/spdk_pid2162006 00:34:28.137 Removing: /var/run/dpdk/spdk_pid2162391 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2162631 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2162794 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2163167 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2164568 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2167866 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2168234 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2168599 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2168741 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2169310 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2169327 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2169861 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2170039 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2170401 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2170424 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2170783 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2170806 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2171311 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2171587 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2171976 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2172342 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2172371 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2172429 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2172764 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2173119 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2173239 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2173487 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2173830 00:34:28.138 Removing: 
/var/run/dpdk/spdk_pid2174182 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2174340 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2174551 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2174891 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2175242 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2175472 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2175636 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2175949 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2176300 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2176597 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2176766 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2177011 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2177360 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2177698 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2177898 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2178072 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2178424 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2178760 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2179010 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2179153 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2179482 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2179819 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2180155 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2180288 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2180540 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2180882 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2181234 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2181468 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2181647 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2181953 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2182308 00:34:28.138 Removing: /var/run/dpdk/spdk_pid2182644 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2182841 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2183016 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2183368 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2183493 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2183838 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2188426 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2286396 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2291453 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2303331 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2309836 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2314709 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2315495 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2325879 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2326237 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2331304 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2338645 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2341745 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2353977 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2364715 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2366744 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2367843 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2388171 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2393176 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2398551 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2400571 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2402707 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2402960 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2403303 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2403446 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2404057 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2406414 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2407501 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2407917 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2414666 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2421302 00:34:28.400 Removing: 
/var/run/dpdk/spdk_pid2427120 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2472389 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2477231 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2485081 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2486640 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2488193 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2493308 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2498364 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2507147 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2507155 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2512217 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2512558 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2512744 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2513236 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2513253 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2514618 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2516639 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2518665 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2520567 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2522574 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2524524 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2531940 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2532771 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2534187 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2535631 00:34:28.400 Removing: /var/run/dpdk/spdk_pid2541842 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2545020 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2551497 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2558223 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2565293 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2566053 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2566747 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2567435 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2568505 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2569203 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2569887 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2570580 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2575664 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2576002 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2583182 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2583479 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2586548 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2593702 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2593708 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2599614 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2601846 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2604370 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2605633 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2608121 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2609641 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2619533 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2620102 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2620706 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2623687 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2624360 00:34:28.662 Removing: /var/run/dpdk/spdk_pid2624848 00:34:28.662 Clean 00:34:28.662 killing process with pid 2100209 00:34:38.699 killing process with pid 2100206 00:34:38.699 killing process with pid 2100208 00:34:38.699 killing process with pid 2100207 00:34:38.699 21:30:16 -- common/autotest_common.sh@1436 -- # return 0 00:34:38.699 21:30:16 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:34:38.699 21:30:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:38.699 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:34:38.699 21:30:16 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:34:38.699 21:30:16 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:34:38.699 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:34:38.699 21:30:16 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:38.699 21:30:16 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:38.699 21:30:16 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:38.699 21:30:16 -- spdk/autotest.sh@394 -- # hash lcov 00:34:38.699 21:30:16 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:38.699 21:30:16 -- spdk/autotest.sh@396 -- # hostname 00:34:38.699 21:30:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:38.960 geninfo: WARNING: invalid characters removed from testname! 00:35:00.924 21:30:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:02.307 21:30:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:04.218 21:30:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:05.603 21:30:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:07.028 21:30:44 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:08.411 21:30:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:09.796 21:30:47 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:09.796 21:30:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.796 21:30:47 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:09.796 21:30:47 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.796 21:30:47 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.796 21:30:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.796 21:30:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.796 21:30:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.796 21:30:47 -- paths/export.sh@5 -- $ export PATH 00:35:09.796 21:30:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.796 21:30:47 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:09.796 21:30:47 -- common/autobuild_common.sh@435 -- $ date +%s 00:35:09.796 21:30:47 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717875047.XXXXXX 00:35:09.796 21:30:47 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717875047.451xXf 00:35:09.796 21:30:47 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:35:09.796 21:30:47 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:35:09.796 21:30:47 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:09.796 21:30:47 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:09.796 21:30:47 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:09.796 21:30:47 -- common/autobuild_common.sh@451 -- $ get_config_params 00:35:09.796 21:30:47 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:35:09.796 21:30:47 -- common/autotest_common.sh@10 -- $ set +x 00:35:09.796 21:30:47 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:35:09.796 21:30:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:35:09.796 21:30:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:09.796 21:30:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:09.796 21:30:47 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:35:09.796 21:30:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:09.796 21:30:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:09.796 21:30:47 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:09.796 21:30:47 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:09.796 21:30:47 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:09.796 21:30:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:09.796 + [[ -n 2057429 ]] 00:35:09.796 + sudo kill 2057429 00:35:09.807 [Pipeline] } 00:35:09.825 [Pipeline] // stage 00:35:09.830 [Pipeline] } 00:35:09.846 [Pipeline] // timeout 00:35:09.851 [Pipeline] } 00:35:09.867 [Pipeline] // catchError 00:35:09.872 [Pipeline] } 00:35:09.890 [Pipeline] // wrap 00:35:09.896 [Pipeline] } 00:35:09.912 [Pipeline] // catchError 00:35:09.921 [Pipeline] stage 00:35:09.923 [Pipeline] { (Epilogue) 00:35:09.937 [Pipeline] catchError 00:35:09.939 [Pipeline] { 00:35:09.953 [Pipeline] echo 00:35:09.954 Cleanup processes 00:35:09.960 [Pipeline] sh 00:35:10.247 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:10.247 2641498 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:10.262 [Pipeline] sh 00:35:10.548 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:10.548 ++ grep -v 'sudo pgrep' 00:35:10.548 ++ awk '{print $1}' 00:35:10.548 + sudo kill -9 00:35:10.548 + true 00:35:10.562 [Pipeline] sh 00:35:10.849 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:23.101 [Pipeline] sh 00:35:23.386 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:23.386 Artifacts sizes are good 00:35:23.401 [Pipeline] archiveArtifacts 00:35:23.408 Archiving artifacts 00:35:23.671 [Pipeline] sh 00:35:23.958 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:23.975 [Pipeline] cleanWs 00:35:23.985 [WS-CLEANUP] Deleting project workspace... 00:35:23.985 [WS-CLEANUP] Deferred wipeout is used... 00:35:23.993 [WS-CLEANUP] done 00:35:23.995 [Pipeline] } 00:35:24.015 [Pipeline] // catchError 00:35:24.027 [Pipeline] sh 00:35:24.315 + logger -p user.info -t JENKINS-CI 00:35:24.326 [Pipeline] } 00:35:24.343 [Pipeline] // stage 00:35:24.349 [Pipeline] } 00:35:24.365 [Pipeline] // node 00:35:24.371 [Pipeline] End of Pipeline 00:35:24.448 Finished: SUCCESS
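Note for anyone reproducing the coverage post-processing shown above outside of CI: the lcov calls capture the coverage gathered during the test run, merge it with the pre-test baseline (cov_base.info), and then strip DPDK, system headers, and example/app code out of cov_total.info. The following is a minimal sketch under stated assumptions only: the workspace path is the one from this log, $(hostname) stands in for the literal spdk-cyp-09 test name, the genhtml_*/geninfo_all_blocks --rc switches are trimmed for brevity, and the filter loop is a condensed form of the individual lcov -r calls in the log.

#!/usr/bin/env bash
# Sketch of the coverage merge/filter sequence from this log (assumes lcov is
# installed and that cov_base.info was captured before the tests ran).
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
OUT=$SPDK/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# Capture coverage produced by the test run, tagged with the host name.
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Remove code that should not count toward SPDK coverage.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

The resulting cov_total.info is the tracefile a later genhtml or reporting step would consume.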
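A second practical detail from the epilogue: the "Cleanup processes" step force-kills anything still running out of the workspace before artifacts are archived, and tolerates the normal case where nothing is left. Below is a condensed, single-command equivalent of the pgrep/grep/awk/kill sequence shown in the log; it is a sketch, not the pipeline's own script, and the workspace path is again the one from this run.

#!/usr/bin/env bash
# Kill leftover processes started from the test workspace. The trailing
# "|| true" mirrors the "+ true" in the log: if the PID list is empty,
# kill fails with a usage error, and that failure is deliberately ignored
# so the cleanup stage still succeeds.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo kill -9 $(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}') || true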